Jan 16 20:20:18 localhost kernel: Linux version 5.14.0-284.36.1.el9_2.x86_64 (mockbuild@x86-64-01.build.eng.rdu2.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Thu Oct 5 08:11:31 EDT 2023
Jan 16 20:20:18 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 16 20:20:18 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-752a3b0ead0e52e830bbd3e504c11b2606df057c884ebbf46dfca691978a83ed/vmlinuz-5.14.0-284.36.1.el9_2.x86_64 ignition.platform.id=qemu console=tty0 console=ttyS0,115200n8 ignition.firstboot ostree=/ostree/boot.1/rhcos/752a3b0ead0e52e830bbd3e504c11b2606df057c884ebbf46dfca691978a83ed/0
Jan 16 20:20:18 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 16 20:20:18 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 16 20:20:18 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 16 20:20:18 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 16 20:20:18 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 16 20:20:18 localhost kernel: signal: max sigframe size: 1776
Jan 16 20:20:18 localhost kernel: BIOS-provided physical RAM map:
Jan 16 20:20:18 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 16 20:20:18 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 16 20:20:18 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 16 20:20:18 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 16 20:20:18 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 16 20:20:18 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 16 20:20:18 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 16 20:20:18 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x00000001bfffffff] usable
Jan 16 20:20:18 localhost kernel: NX (Execute Disable) protection: active
Jan 16 20:20:18 localhost kernel: SMBIOS 2.8 present.
Jan 16 20:20:18 localhost kernel: DMI: Red Hat KVM, BIOS 1.16.1-1.el9 04/01/2014
Jan 16 20:20:18 localhost kernel: Hypervisor detected: KVM
Jan 16 20:20:18 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 16 20:20:18 localhost kernel: kvm-clock: using sched offset of 16763906235 cycles
Jan 16 20:20:18 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 16 20:20:18 localhost kernel: tsc: Detected 1999.998 MHz processor
Jan 16 20:20:18 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 16 20:20:18 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 16 20:20:18 localhost kernel: last_pfn = 0x1c0000 max_arch_pfn = 0x400000000
Jan 16 20:20:18 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 16 20:20:18 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 16 20:20:18 localhost kernel: found SMP MP-table at [mem 0x000f5b50-0x000f5b5f]
Jan 16 20:20:18 localhost kernel: Using GB pages for direct mapping
Jan 16 20:20:18 localhost kernel: RAMDISK: [mem 0x2d0b2000-0x32850fff]
Jan 16 20:20:18 localhost kernel: ACPI: Early table checksum verification disabled
Jan 16 20:20:18 localhost kernel: ACPI: RSDP 0x00000000000F5B10 000014 (v00 BOCHS )
Jan 16 20:20:18 localhost kernel: ACPI: RSDT 0x00000000BFFE1870 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 20:20:18 localhost kernel: ACPI: FACP 0x00000000BFFE1744 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 20:20:18 localhost kernel: ACPI: DSDT 0x00000000BFFDFD40 001A04 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 20:20:18 localhost kernel: ACPI: FACS 0x00000000BFFDFD00 000040
Jan 16 20:20:18 localhost kernel: ACPI: APIC 0x00000000BFFE17B8 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 20:20:18 localhost kernel: ACPI: WAET 0x00000000BFFE1848 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 20:20:18 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1744-0xbffe17b7]
Jan 16 20:20:18 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfd40-0xbffe1743]
Jan 16 20:20:18 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfd00-0xbffdfd3f]
Jan 16 20:20:18 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe17b8-0xbffe1847]
Jan 16 20:20:18 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1848-0xbffe186f]
Jan 16 20:20:18 localhost kernel: No NUMA configuration found
Jan 16 20:20:18 localhost kernel: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Jan 16 20:20:18 localhost kernel: NODE_DATA(0) allocated [mem 0x1bffd5000-0x1bfffffff]
Jan 16 20:20:18 localhost kernel: Zone ranges:
Jan 16 20:20:18 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 16 20:20:18 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 16 20:20:18 localhost kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 16 20:20:18 localhost kernel: Device empty
Jan 16 20:20:18 localhost kernel: Movable zone start for each node
Jan 16 20:20:18 localhost kernel: Early memory node ranges
Jan 16 20:20:18 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 16 20:20:18 localhost kernel: node 0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 16 20:20:18 localhost kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 16 20:20:18 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000001bfffffff]
Jan 16 20:20:18 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 16 20:20:18 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 16 20:20:18 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 16 20:20:18 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 16 20:20:18 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 16 20:20:18 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 16 20:20:18 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 16 20:20:18 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 16 20:20:18 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 16 20:20:18 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 16 20:20:18 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 16 20:20:18 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 16 20:20:18 localhost kernel: TSC deadline timer available
Jan 16 20:20:18 localhost kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 16 20:20:18 localhost kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 16 20:20:18 localhost kernel: kvm-guest: setup PV sched yield
Jan 16 20:20:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 16 20:20:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 16 20:20:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 16 20:20:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 16 20:20:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 16 20:20:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 16 20:20:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 16 20:20:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 16 20:20:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 16 20:20:18 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 16 20:20:18 localhost kernel: Booting paravirtualized kernel on KVM
Jan 16 20:20:18 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 16 20:20:18 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 16 20:20:18 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u524288
Jan 16 20:20:18 localhost kernel: pcpu-alloc: s188416 r8192 d28672 u524288 alloc=1*2097152
Jan 16 20:20:18 localhost kernel: pcpu-alloc: [0] 0 1 2 3
Jan 16 20:20:18 localhost kernel: kvm-guest: PV spinlocks enabled
Jan 16 20:20:18 localhost kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 16 20:20:18 localhost kernel: Fallback order for Node 0: 0
Jan 16 20:20:18 localhost kernel: Built 1 zonelists, mobility grouping on. Total pages: 1547995
Jan 16 20:20:18 localhost kernel: Policy zone: Normal
Jan 16 20:20:18 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-752a3b0ead0e52e830bbd3e504c11b2606df057c884ebbf46dfca691978a83ed/vmlinuz-5.14.0-284.36.1.el9_2.x86_64 ignition.platform.id=qemu console=tty0 console=ttyS0,115200n8 ignition.firstboot ostree=/ostree/boot.1/rhcos/752a3b0ead0e52e830bbd3e504c11b2606df057c884ebbf46dfca691978a83ed/0
Jan 16 20:20:18 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-752a3b0ead0e52e830bbd3e504c11b2606df057c884ebbf46dfca691978a83ed/vmlinuz-5.14.0-284.36.1.el9_2.x86_64 ostree=/ostree/boot.1/rhcos/752a3b0ead0e52e830bbd3e504c11b2606df057c884ebbf46dfca691978a83ed/0", will be passed to user space.
Jan 16 20:20:18 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 16 20:20:18 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 16 20:20:18 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 16 20:20:18 localhost kernel: software IO TLB: area num 4.
Jan 16 20:20:18 localhost kernel: Memory: 3120108K/6290916K available (14342K kernel code, 5532K rwdata, 10180K rodata, 2788K init, 19820K bss, 329888K reserved, 0K cma-reserved)
Jan 16 20:20:18 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0
Jan 16 20:20:18 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 16 20:20:18 localhost kernel: Kernel/User page tables isolation: enabled
Jan 16 20:20:18 localhost kernel: ftrace: allocating 44791 entries in 175 pages
Jan 16 20:20:18 localhost kernel: ftrace: allocated 175 pages with 6 groups
Jan 16 20:20:18 localhost kernel: Dynamic Preempt: voluntary
Jan 16 20:20:18 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 16 20:20:18 localhost kernel: rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Jan 16 20:20:18 localhost kernel: Trampoline variant of Tasks RCU enabled.
Jan 16 20:20:18 localhost kernel: Rude variant of Tasks RCU enabled.
Jan 16 20:20:18 localhost kernel: Tracing variant of Tasks RCU enabled.
Jan 16 20:20:18 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 16 20:20:18 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 16 20:20:18 localhost kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Jan 16 20:20:18 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 16 20:20:18 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 16 20:20:18 localhost kernel: Console: colour VGA+ 80x25
Jan 16 20:20:18 localhost kernel: printk: console [tty0] enabled
Jan 16 20:20:18 localhost kernel: printk: console [ttyS0] enabled
Jan 16 20:20:18 localhost kernel: ACPI: Core revision 20211217
Jan 16 20:20:18 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 16 20:20:18 localhost kernel: x2apic enabled
Jan 16 20:20:18 localhost kernel: Switched APIC routing to physical x2apic.
Jan 16 20:20:18 localhost kernel: kvm-guest: setup PV IPIs
Jan 16 20:20:18 localhost kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a8595ce59, max_idle_ns: 881590778713 ns
Jan 16 20:20:18 localhost kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999998)
Jan 16 20:20:18 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 16 20:20:18 localhost kernel: LSM: Security Framework initializing
Jan 16 20:20:18 localhost kernel: Yama: becoming mindful.
Jan 16 20:20:18 localhost kernel: SELinux: Initializing.
Jan 16 20:20:18 localhost kernel: LSM support for eBPF active
Jan 16 20:20:18 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 16 20:20:18 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 16 20:20:18 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 16 20:20:18 localhost kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 16 20:20:18 localhost kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 16 20:20:18 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 16 20:20:18 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 16 20:20:18 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 16 20:20:18 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 16 20:20:18 localhost kernel: Speculative Store Bypass: Vulnerable
Jan 16 20:20:18 localhost kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 16 20:20:18 localhost kernel: MMIO Stale Data: Unknown: No mitigations
Jan 16 20:20:18 localhost kernel: Freeing SMP alternatives memory: 36K
Jan 16 20:20:18 localhost kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz (family: 0x6, model: 0x2d, stepping: 0x7)
Jan 16 20:20:18 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Jan 16 20:20:18 localhost kernel: cblist_init_generic: Setting shift to 2 and lim to 1.
Jan 16 20:20:18 localhost kernel: cblist_init_generic: Setting shift to 2 and lim to 1.
Jan 16 20:20:18 localhost kernel: cblist_init_generic: Setting shift to 2 and lim to 1.
Jan 16 20:20:18 localhost kernel: Performance Events: SandyBridge events, full-width counters, Intel PMU driver.
Jan 16 20:20:18 localhost kernel: ... version: 2
Jan 16 20:20:18 localhost kernel: ... bit width: 48
Jan 16 20:20:18 localhost kernel: ... generic registers: 4
Jan 16 20:20:18 localhost kernel: ... value mask: 0000ffffffffffff
Jan 16 20:20:18 localhost kernel: ... max period: 00007fffffffffff
Jan 16 20:20:18 localhost kernel: ... fixed-purpose events: 3
Jan 16 20:20:18 localhost kernel: ... event mask: 000000070000000f
Jan 16 20:20:18 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 16 20:20:18 localhost kernel: rcu: Max phase no-delay instances is 400.
Jan 16 20:20:18 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 16 20:20:18 localhost kernel: x86: Booting SMP configuration:
Jan 16 20:20:18 localhost kernel: .... node #0, CPUs: #1
Jan 16 20:20:18 localhost kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 16 20:20:18 localhost kernel: #2
Jan 16 20:20:18 localhost kernel: smpboot: CPU 2 Converting physical 0 to logical die 2
Jan 16 20:20:18 localhost kernel: #3
Jan 16 20:20:18 localhost kernel: smpboot: CPU 3 Converting physical 0 to logical die 3
Jan 16 20:20:18 localhost kernel: smp: Brought up 1 node, 4 CPUs
Jan 16 20:20:18 localhost kernel: smpboot: Max logical packages: 4
Jan 16 20:20:18 localhost kernel: smpboot: Total of 4 processors activated (15999.98 BogoMIPS)
Jan 16 20:20:18 localhost kernel: node 0 deferred pages initialised in 11ms
Jan 16 20:20:18 localhost kernel: devtmpfs: initialized
Jan 16 20:20:18 localhost kernel: x86/mm: Memory block size: 128MB
Jan 16 20:20:18 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 16 20:20:18 localhost kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 16 20:20:18 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 16 20:20:18 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 16 20:20:18 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 16 20:20:18 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 16 20:20:18 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 16 20:20:18 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 16 20:20:18 localhost kernel: audit: type=2000 audit(1705436404.315:1): state=initialized audit_enabled=0 res=1
Jan 16 20:20:18 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 16 20:20:18 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 16 20:20:18 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 16 20:20:18 localhost kernel: cpuidle: using governor menu
Jan 16 20:20:18 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Jan 16 20:20:18 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 16 20:20:18 localhost kernel: PCI: Using configuration type 1 for base access
Jan 16 20:20:18 localhost kernel: core: PMU erratum BJ122, BV98, HSD29 workaround disabled, HT off
Jan 16 20:20:18 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 16 20:20:18 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Jan 16 20:20:18 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jan 16 20:20:18 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jan 16 20:20:18 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 16 20:20:18 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 16 20:20:18 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 16 20:20:18 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 16 20:20:18 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 16 20:20:18 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jan 16 20:20:18 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jan 16 20:20:18 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jan 16 20:20:18 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 16 20:20:18 localhost kernel: ACPI: Interpreter enabled
Jan 16 20:20:18 localhost kernel: ACPI: PM: (supports S0 S5)
Jan 16 20:20:18 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 16 20:20:18 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 16 20:20:18 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 16 20:20:18 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 16 20:20:18 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 16 20:20:18 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 16 20:20:18 localhost kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 16 20:20:18 localhost kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [3] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [4] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [5] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [6] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [7] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [8] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [9] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [10] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [11] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [12] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [13] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [14] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [15] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [16] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [17] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [18] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [19] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [20] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [21] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [22] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [23] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [24] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [25] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [26] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [27] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [28] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [29] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [30] registered
Jan 16 20:20:18 localhost kernel: acpiphp: Slot [31] registered
Jan 16 20:20:18 localhost kernel: PCI host bridge to bus 0000:00
Jan 16 20:20:18 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 16 20:20:18 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 16 20:20:18 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 16 20:20:18 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 16 20:20:18 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x1c0000000-0x23fffffff window]
Jan 16 20:20:18 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 16 20:20:18 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 16 20:20:18 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 16 20:20:18 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 16 20:20:18 localhost kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jan 16 20:20:18 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 16 20:20:18 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 16 20:20:18 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 16 20:20:18 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 16 20:20:18 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 16 20:20:18 localhost kernel: pci 0000:00:01.2: reg 0x20: [io 0xc080-0xc09f]
Jan 16 20:20:18 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 16 20:20:18 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 16 20:20:18 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 16 20:20:18 localhost kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x170 took 59570 usecs
Jan 16 20:20:18 localhost kernel: pci 0000:00:02.0: [1013:00b8] type 00 class 0x030000
Jan 16 20:20:18 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfc000000-0xfdffffff pref]
Jan 16 20:20:18 localhost kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd0000-0xfebd0fff]
Jan 16 20:20:18 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 16 20:20:18 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 16 20:20:18 localhost kernel: pci 0000:00:02.0: pci_fixup_video+0x0/0xe0 took 40039 usecs
Jan 16 20:20:18 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 16 20:20:18 localhost kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf]
Jan 16 20:20:18 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 16 20:20:18 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 16 20:20:18 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb40000-0xfeb7ffff pref]
Jan 16 20:20:18 localhost kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 16 20:20:18 localhost kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 16 20:20:18 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 16 20:20:18 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 16 20:20:18 localhost kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 16 20:20:18 localhost kernel: pci 0000:00:05.0: [1af4:1003] type 00 class 0x078000
Jan 16 20:20:18 localhost kernel: pci 0000:00:05.0: reg 0x10: [io 0xc000-0xc03f]
Jan 16 20:20:18 localhost kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 16 20:20:18 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 16 20:20:18 localhost kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 16 20:20:18 localhost kernel: pci 0000:00:06.0: reg 0x10: [io 0xc040-0xc07f]
Jan 16 20:20:18 localhost kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebd4000-0xfebd4fff]
Jan 16 20:20:18 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe00c000-0xfe00ffff 64bit pref]
Jan 16 20:20:18 localhost kernel: pci 0000:00:07.0: [1af4:1002] type 00 class 0x00ff00
Jan 16 20:20:18 localhost kernel: pci 0000:00:07.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 16 20:20:18 localhost kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe010000-0xfe013fff 64bit pref]
Jan 16 20:20:18 localhost kernel: pci 0000:00:08.0: [1af4:1005] type 00 class 0x00ff00
Jan 16 20:20:18 localhost kernel: pci 0000:00:08.0: reg 0x10: [io 0xc100-0xc11f]
Jan 16 20:20:18 localhost kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe014000-0xfe017fff 64bit pref]
Jan 16 20:20:18 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 16 20:20:18 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 16 20:20:18 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 16 20:20:18 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 16 20:20:18 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 16 20:20:18 localhost kernel: iommu: Default domain type: Translated
Jan 16 20:20:18 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 16 20:20:18 localhost kernel: SCSI subsystem initialized
Jan 16 20:20:18 localhost kernel: ACPI: bus type USB registered
Jan 16 20:20:18 localhost kernel: usbcore: registered new interface driver usbfs
Jan 16 20:20:18 localhost kernel: usbcore: registered new interface driver hub
Jan 16 20:20:18 localhost kernel: usbcore: registered new device driver usb
Jan 16 20:20:18 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 16 20:20:18 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 16 20:20:18 localhost kernel: PTP clock support registered
Jan 16 20:20:18 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 16 20:20:18 localhost kernel: NetLabel: Initializing
Jan 16 20:20:18 localhost kernel: NetLabel: domain hash size = 128
Jan 16 20:20:18 localhost kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
Jan 16 20:20:18 localhost kernel: NetLabel: unlabeled traffic allowed by default
Jan 16 20:20:18 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 16 20:20:18 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 16 20:20:18 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 16 20:20:18 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 16 20:20:18 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 16 20:20:18 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 16 20:20:19 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 16 20:20:19 localhost kernel: vgaarb: loaded
Jan 16 20:20:19 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 16 20:20:19 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 16 20:20:19 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 16 20:20:19 localhost kernel: pnp: PnP ACPI init
Jan 16 20:20:19 localhost kernel: pnp 00:03: [dma 2]
Jan 16 20:20:19 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 16 20:20:19 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 16 20:20:19 localhost kernel: NET: Registered PF_INET protocol family
Jan 16 20:20:19 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 16 20:20:19 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 16 20:20:19 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 16 20:20:19 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 16 20:20:19 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 16 20:20:19 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 16 20:20:19 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 16 20:20:19 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 16 20:20:19 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 16 20:20:19 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 16 20:20:19 localhost kernel: NET: Registered PF_XDP protocol family
Jan 16 20:20:19 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 16 20:20:19 localhost kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 16 20:20:19 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 16 20:20:19 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 16 20:20:19 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x1c0000000-0x23fffffff window]
Jan 16 20:20:19 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 16 20:20:19 localhost kernel: pci 0000:00:00.0: quirk_passive_release+0x0/0x80 took 26962 usecs
Jan 16 20:20:19 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 16 20:20:19 localhost kernel: pci 0000:00:00.0: quirk_natoma+0x0/0x20 took 27068 usecs
Jan 16 20:20:19 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 16 20:20:19 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 141250 usecs
Jan 16 20:20:19 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 16 20:20:19 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 16 20:20:19 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 16 20:20:19 localhost kernel: software IO TLB: mapped [mem 0x00000000bbfdb000-0x00000000bffdb000] (64MB)
Jan 16 20:20:19 localhost kernel: ACPI: bus type thunderbolt registered
Jan 16 20:20:19 localhost kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a8595ce59, max_idle_ns: 881590778713 ns
Jan 16 20:20:19 localhost kernel: Initialise system trusted keyrings
Jan 16 20:20:19 localhost kernel: Key type blacklist registered
Jan 16 20:20:19 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 16 20:20:19 localhost kernel: zbud: loaded
Jan 16 20:20:19 localhost kernel: integrity: Platform Keyring initialized
Jan 16 20:20:19 localhost kernel: NET: Registered PF_ALG protocol family
Jan 16 20:20:19 localhost kernel: xor: automatically using best checksumming function avx
Jan 16 20:20:19 localhost kernel: Key type asymmetric registered
Jan 16 20:20:19 localhost kernel: Asymmetric key parser 'x509' registered
Jan 16 20:20:19 localhost kernel: Running certificate verification selftests
Jan 16 20:20:19 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 16 20:20:19 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 16 20:20:19 localhost kernel: io scheduler mq-deadline registered
Jan 16 20:20:19 localhost kernel: io scheduler kyber registered
Jan 16 20:20:19 localhost kernel: io scheduler bfq registered
Jan 16 20:20:19 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 16 20:20:19 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 16 20:20:19 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 16 20:20:19 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 16 20:20:19 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 10
Jan 16 20:20:19 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 16 20:20:19 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 11
Jan 16 20:20:19 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 16 20:20:19 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 16 20:20:19 localhost kernel: Non-volatile memory driver v1.3
Jan 16 20:20:19 localhost kernel: random: crng init done
Jan 16 20:20:19 localhost kernel: rdac: device handler registered
Jan 16 20:20:19 localhost kernel: hp_sw: device handler registered
Jan 16 20:20:19 localhost kernel: emc: device handler registered
Jan 16 20:20:19 localhost kernel: alua: device handler registered
Jan 16 20:20:19 localhost kernel: libphy: Fixed MDIO Bus: probed
Jan 16 20:20:19 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Jan 16 20:20:19 localhost kernel: Freeing initrd memory: 89724K
Jan 16 20:20:19 localhost kernel: ehci-pci: EHCI PCI platform driver
Jan 16 20:20:19 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Jan 16 20:20:19 localhost kernel: ohci-pci: OHCI PCI platform driver
Jan 16 20:20:19 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Jan 16 20:20:19 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 16 20:20:19 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 16 20:20:19 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 16 20:20:19 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c080
Jan 16 20:20:19 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 16 20:20:19 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 16 20:20:19 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 16 20:20:19 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.36.1.el9_2.x86_64 uhci_hcd
Jan 16 20:20:19 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 16 20:20:19 localhost kernel: hub 1-0:1.0: USB hub found
Jan 16 20:20:19 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 16 20:20:19 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 16 20:20:19 localhost kernel: usbserial: USB Serial support registered for generic
Jan 16 20:20:19 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 16 20:20:19 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 16 20:20:19 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 16 20:20:19 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 16 20:20:19 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 16 20:20:19 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 16 20:20:19 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 16 20:20:19 localhost kernel: rtc_cmos 00:04: setting system clock to 2024-01-16T20:20:14 UTC (1705436414)
Jan 16 20:20:19 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 16 20:20:19 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 16 20:20:19 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 16 20:20:19 localhost kernel: intel_pstate: CPU model not supported
Jan 16 20:20:19 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 16 20:20:19 localhost kernel: usbcore: registered new interface driver usbhid
Jan 16 20:20:19 localhost kernel: usbhid: USB HID core driver
Jan 16 20:20:19 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 16 20:20:19 localhost kernel: Initializing XFRM netlink socket
Jan 16 20:20:19 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 16 20:20:19 localhost kernel: Segment Routing with IPv6
Jan 16 20:20:19 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 16 20:20:19 localhost kernel: mpls_gso: MPLS GSO support
Jan 16 20:20:19 localhost kernel: IPI shorthand broadcast: enabled
Jan 16 20:20:19 localhost kernel: AVX version of gcm_enc/dec engaged.
Jan 16 20:20:19 localhost kernel: AES CTR mode by8 optimization enabled
Jan 16 20:20:19 localhost kernel: registered taskstats version 1
Jan 16 20:20:19 localhost kernel: Loading compiled-in X.509 certificates
Jan 16 20:20:19 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: 863beefe1034e737741775482ecdd49a6b8e727c'
Jan 16 20:20:19 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 16 20:20:19 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 16 20:20:19 localhost kernel: zswap: loaded using pool lzo/zbud
Jan 16 20:20:19 localhost kernel: page_owner is disabled
Jan 16 20:20:19 localhost kernel: Key type big_key registered
Jan 16 20:20:19 localhost kernel: Key type encrypted registered
Jan 16 20:20:19 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 16 20:20:19 localhost kernel: Loading compiled-in module X.509 certificates
Jan 16 20:20:19 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: 863beefe1034e737741775482ecdd49a6b8e727c'
Jan 16 20:20:19 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 16 20:20:19 localhost kernel: ima: No architecture policies found
Jan 16 20:20:19 localhost kernel: evm: Initialising EVM extended attributes:
Jan 16 20:20:19 localhost kernel: evm: security.selinux
Jan 16 20:20:19 localhost kernel: evm: security.SMACK64 (disabled)
Jan 16 20:20:19 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 16 20:20:19 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 16 20:20:19 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 16 20:20:19 localhost kernel: evm: security.apparmor (disabled)
Jan 16 20:20:19 localhost kernel: evm: security.ima
Jan 16 20:20:19 localhost kernel: evm: security.capability
Jan 16 20:20:19 localhost kernel: evm: HMAC attrs: 0x1
Jan 16 20:20:19 localhost kernel: Unstable clock detected, switching default tracing clock to "global" If you want to keep using the local clock, then add: "trace_clock=local" on the kernel command line
Jan 16 20:20:19 localhost kernel: Freeing unused decrypted memory: 2036K
Jan 16 20:20:19 localhost kernel: Freeing unused kernel image (initmem) memory: 2788K
Jan 16 20:20:19 localhost kernel: Write protecting the kernel read-only data: 26624k
Jan 16 20:20:19 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jan 16 20:20:19 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Jan 16 20:20:19 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 16 20:20:19 localhost kernel: x86/mm: Checking user space page tables
Jan 16 20:20:19 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
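The "Unstable clock detected" message above names its own remedy: appending trace_clock=local to the kernel command line. A minimal sketch of doing that on an rpm-ostree-managed host like this RHCOS node (assuming direct shell access; on an OpenShift cluster the supported route would be a MachineConfig rather than editing the node directly):

    # Queue the kernel argument the message suggests; it takes effect on the next boot.
    rpm-ostree kargs --append=trace_clock=local
    # After rebooting, confirm it is present on the running command line.
    cat /proc/cmdline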
Jan 16 20:20:19 localhost kernel: Run /init as init process
Jan 16 20:20:19 localhost kernel: with arguments:
Jan 16 20:20:19 localhost kernel: /init
Jan 16 20:20:19 localhost kernel: with environment:
Jan 16 20:20:19 localhost kernel: HOME=/
Jan 16 20:20:19 localhost kernel: TERM=linux
Jan 16 20:20:19 localhost kernel: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-752a3b0ead0e52e830bbd3e504c11b2606df057c884ebbf46dfca691978a83ed/vmlinuz-5.14.0-284.36.1.el9_2.x86_64
Jan 16 20:20:19 localhost kernel: ostree=/ostree/boot.1/rhcos/752a3b0ead0e52e830bbd3e504c11b2606df057c884ebbf46dfca691978a83ed/0
Jan 16 20:20:19 localhost systemd-journald[305]: Missed 32 kernel messages
Jan 16 20:20:19 localhost kernel: fuse: init (API version 7.36)
Jan 16 20:20:19 localhost systemd-journald[305]: Journal started
Jan 16 20:20:19 localhost systemd-journald[305]: Runtime Journal (/run/log/journal/efce106942834b6e8dcf2db4b261dcf3) is 8.0M, max 118.3M, 110.3M free.
Jan 16 20:20:18 localhost systemd-sysusers[307]: Creating group 'nobody' with GID 65534.
Jan 16 20:20:19 localhost systemd-modules-load[306]: Inserted module 'fuse'
Jan 16 20:20:19 localhost systemd-modules-load[306]: Module 'msr' is built in
Jan 16 20:20:20 localhost systemd[1]: Finished Afterburn Initrd Setup Network Kernel Arguments.
Jan 16 20:20:20 localhost systemd[1]: Finished CoreOS: Touch /run/agetty.reload.
Jan 16 20:20:20 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 16 20:20:20 localhost systemd[1]: Finished Load Kernel Modules.
Jan 16 20:20:20 localhost systemd[1]: Finished Setup Virtual Console.
Jan 16 20:20:20 localhost systemd[1]: Starting dracut ask for additional cmdline parameters...
Jan 16 20:20:20 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 16 20:20:20 localhost systemd-sysusers[307]: Creating group 'sgx' with GID 999.
Jan 16 20:20:20 localhost systemd-sysusers[307]: Creating group 'users' with GID 100.
Jan 16 20:20:20 localhost systemd-sysusers[307]: Creating group 'root' with GID 998.
Jan 16 20:20:20 localhost systemd-sysusers[307]: Creating group 'dbus' with GID 81.
Jan 16 20:20:20 localhost systemd-sysusers[307]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 16 20:20:20 localhost systemd[1]: Finished Create System Users.
Jan 16 20:20:20 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 16 20:20:20 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 16 20:20:20 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 16 20:20:20 localhost systemd[1]: Finished dracut ask for additional cmdline parameters.
Jan 16 20:20:20 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 16 20:20:20 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 16 20:20:20 localhost systemd[1]: Starting dracut cmdline hook...
Jan 16 20:20:20 localhost dracut-cmdline[330]: dracut-414.92.202310210434-0 dracut-057-21.git20230214.el9
Jan 16 20:20:20 localhost dracut-cmdline[330]: Using kernel command line parameters: ip=auto BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-752a3b0ead0e52e830bbd3e504c11b2606df057c884ebbf46dfca691978a83ed/vmlinuz-5.14.0-284.36.1.el9_2.x86_64 ignition.platform.id=qemu console=tty0 console=ttyS0,115200n8 ignition.firstboot ostree=/ostree/boot.1/rhcos/752a3b0ead0e52e830bbd3e504c11b2606df057c884ebbf46dfca691978a83ed/0
Jan 16 20:20:21 localhost systemd[1]: Finished dracut cmdline hook.
Jan 16 20:20:21 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 16 20:20:22 localhost systemd-journald[305]: Missed 28 kernel messages
Jan 16 20:20:22 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 16 20:20:22 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 16 20:20:22 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Jan 16 20:20:22 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 16 20:20:22 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 16 20:20:22 localhost systemd-udevd[472]: Using default interface naming scheme 'rhel-9.0'.
Jan 16 20:20:22 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 16 20:20:22 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 16 20:20:22 localhost dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Jan 16 20:20:22 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 16 20:20:22 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 16 20:20:23 localhost systemd[1]: sys-module-fuse.device: Failed to enqueue SYSTEMD_WANTS= job, ignoring: Unit sys-fs-fuse-connections.mount not found.
Jan 16 20:20:23 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 16 20:20:23 localhost systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 16 20:20:23 localhost udevadm[534]: systemd-udev-settle.service is deprecated. Please fix multipathd-configure.service not to pull it in.
Jan 16 20:20:23 localhost systemd-journald[305]: Missed 12 kernel messages
Jan 16 20:20:23 localhost kernel: virtio_blk virtio3: [vda] 67108864 512-byte logical blocks (34.4 GB/32.0 GiB)
Jan 16 20:20:23 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 16 20:20:23 localhost kernel: GPT:33554431 != 67108863
Jan 16 20:20:23 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 16 20:20:23 localhost kernel: GPT:33554431 != 67108863
Jan 16 20:20:23 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 16 20:20:23 localhost kernel: vda: vda1 vda2 vda3 vda4
Jan 16 20:20:23 localhost kernel: libata version 3.00 loaded.
Jan 16 20:20:23 localhost kernel: virtio_net virtio0 ens3: renamed from eth0
Jan 16 20:20:23 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 16 20:20:23 localhost kernel: virtio_net virtio1 ens4: renamed from eth1
Jan 16 20:20:23 localhost kernel: scsi host0: ata_piix
Jan 16 20:20:23 localhost kernel: scsi host1: ata_piix
Jan 16 20:20:23 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 16 20:20:23 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 16 20:20:24 localhost systemd[1]: Found device /dev/disk/by-label/root.
Jan 16 20:20:25 localhost systemd[1]: Found device /dev/disk/by-label/boot.
Jan 16 20:20:25 localhost systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 16 20:20:25 localhost systemd[1]: Starting Ensure Unique `boot` Filesystem Label...
Jan 16 20:20:25 localhost systemd[1]: Device-Mapper Multipath Default Configuration was skipped because of an unmet condition check (ConditionKernelCommandLine=rd.multipath=default).
Jan 16 20:20:25 localhost systemd[1]: Starting Device-Mapper Multipath Device Controller...
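The GPT warnings above (33554431 != 67108863) are typical of a disk image that was grown, here from 16 GiB to 32 GiB of 512-byte sectors, without moving the backup GPT header to the new end of the device; first boot proceeds anyway, and the "Generate New UUID For Boot Disk GPT" unit below rewrites the table. If the same complaint had to be cleared by hand, a minimal sketch (assuming the guest disk is /dev/vda and sgdisk from the gdisk package is available, alongside the GNU Parted route the kernel message names):

    # Relocate the secondary GPT header/table to the actual end of the disk.
    sgdisk --move-second-header /dev/vda
    # Or let parted detect the mismatch and offer to fix it interactively.
    parted /dev/vda print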
Jan 16 20:20:25 localhost multipathd[608]: --------start up--------
Jan 16 20:20:25 localhost multipathd[608]: read /etc/multipath.conf
Jan 16 20:20:25 localhost multipathd[608]: /etc/multipath.conf does not exist, blacklisting all devices.
Jan 16 20:20:25 localhost multipathd[608]: You can run "/sbin/mpathconf --enable" to create
Jan 16 20:20:25 localhost multipathd[608]: /etc/multipath.conf. See man mpathconf(8) for more details
Jan 16 20:20:25 localhost multipathd[608]: /etc/multipath.conf does not exist, blacklisting all devices.
Jan 16 20:20:25 localhost multipathd[608]: You can run "/sbin/mpathconf --enable" to create
Jan 16 20:20:25 localhost multipathd[608]: /etc/multipath.conf. See man mpathconf(8) for more details
Jan 16 20:20:25 localhost multipathd[608]: path checkers start up
Jan 16 20:20:25 localhost systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 16 20:20:25 localhost systemd[1]: Finished Ensure Unique `boot` Filesystem Label.
Jan 16 20:20:25 localhost systemd[1]: Starting Generate New UUID For Boot Disk GPT...
Jan 16 20:20:25 localhost systemd-journald[305]: Missed 18 kernel messages
Jan 16 20:20:25 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 16 20:20:25 localhost coreos-gpt-setup[617]: Randomizing disk GUID
Jan 16 20:20:25 localhost systemd-journald[305]: Missed 1 kernel messages
Jan 16 20:20:25 localhost kernel: GPT:33554431 != 67108863
Jan 16 20:20:25 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 16 20:20:25 localhost kernel: GPT:33554431 != 67108863
Jan 16 20:20:25 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 16 20:20:25 localhost kernel: vda: vda1 vda2 vda3 vda4
Jan 16 20:20:25 localhost kernel: vda: vda1 vda2 vda3 vda4
Jan 16 20:20:26 localhost kernel: vda: vda1 vda2 vda3 vda4
Jan 16 20:20:26 localhost coreos-gpt-setup[621]: The operation has completed successfully.
Jan 16 20:20:27 localhost systemd[1]: Finished Generate New UUID For Boot Disk GPT.
Jan 16 20:20:27 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 16 20:20:27 localhost systemd[1]: Reached target Local File Systems.
Jan 16 20:20:27 localhost systemd[1]: Reached target System Initialization.
Jan 16 20:20:27 localhost systemd[1]: Reached target Basic System.
Jan 16 20:20:27 localhost systemd[1]: Persist Osmet Files (ISO) was skipped because of an unmet condition check (ConditionKernelCommandLine=coreos.liveiso).
Jan 16 20:20:27 localhost systemd[1]: CoreOS Secex Ignition Config Decryptor was skipped because of an unmet condition check (ConditionPathExists=/run/coreos/secure-execution).
Jan 16 20:20:27 localhost systemd[1]: Starting Ignition OSTree: Regenerate Filesystem UUID (boot)...
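multipathd spells out its own fix above: with no /etc/multipath.conf present, every device is blacklisted. A minimal sketch using exactly the command the log names (assuming the RHEL 9 device-mapper-multipath tooling):

    # Generate a default /etc/multipath.conf and enable multipathd.
    /sbin/mpathconf --enable
    # Sanity-check the effective configuration and any discovered paths.
    multipath -t | head
    multipathd show paths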
Jan 16 20:20:27 localhost ignition-ostree-firstboot-uuid[768]: e2fsck 1.46.5 (30-Dec-2021)
Jan 16 20:20:27 localhost ignition-ostree-firstboot-uuid[768]: Pass 1: Checking inodes, blocks, and sizes
Jan 16 20:20:27 localhost ignition-ostree-firstboot-uuid[768]: Pass 2: Checking directory structure
Jan 16 20:20:27 localhost ignition-ostree-firstboot-uuid[768]: Pass 3: Checking directory connectivity
Jan 16 20:20:27 localhost ignition-ostree-firstboot-uuid[768]: Pass 4: Checking reference counts
Jan 16 20:20:27 localhost ignition-ostree-firstboot-uuid[768]: Pass 5: Checking group summary information
Jan 16 20:20:27 localhost ignition-ostree-firstboot-uuid[768]: boot: 364/98304 files (0.3% non-contiguous), 146505/393216 blocks
Jan 16 20:20:27 localhost systemd[1]: Finished Ignition OSTree: Regenerate Filesystem UUID (boot).
Jan 16 20:20:27 localhost ignition-ostree-firstboot-uuid[781]: tune2fs 1.46.5 (30-Dec-2021)
Jan 16 20:20:27 localhost systemd[1]: Starting CoreOS Ignition User Config Setup...
Jan 16 20:20:27 localhost ignition-ostree-firstboot-uuid[764]: Regenerated UUID for /dev/disk/by-label/boot
Jan 16 20:20:28 localhost systemd-journald[305]: Missed 20 kernel messages
Jan 16 20:20:28 localhost kernel: EXT4-fs (vda3): mounted filesystem with ordered data mode. Quota mode: none.
Jan 16 20:20:28 localhost coreos-ignition-setup-user[784]: File /mnt/boot_partition/ignition/config.ign does not exist.. Skipping copy
Jan 16 20:20:28 localhost kernel: EXT4-fs (vda3): unmounting filesystem.
Jan 16 20:20:28 localhost systemd[1]: Finished CoreOS Ignition User Config Setup.
Jan 16 20:20:28 localhost systemd[1]: Starting Ignition (fetch-offline)...
Jan 16 20:20:28 localhost ignition[793]: Ignition 2.16.2
Jan 16 20:20:28 localhost ignition[793]: Stage: fetch-offline
Jan 16 20:20:28 localhost ignition[793]: reading system config file "/usr/lib/ignition/base.d/00-core.ign"
Jan 16 20:20:28 localhost ignition[793]: parsing config with SHA512: ff6a5153be363997e4d5d3ea8cc4048373a457c48c4a5b134a08a30aacd167c1e0f099f0bdf1e24c99ad180628cd02b767b863b5fe3a8fce3fe1886847eb8e2e
Jan 16 20:20:28 localhost ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 16 20:20:28 localhost ignition[793]: parsed url from cmdline: ""
Jan 16 20:20:28 localhost ignition[793]: no config URL provided
Jan 16 20:20:28 localhost ignition[793]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 20:20:28 localhost ignition[793]: no config at "/usr/lib/ignition/user.ign"
Jan 16 20:20:28 localhost ignition[793]: op(1): [started] loading QEMU firmware config module
Jan 16 20:20:28 localhost ignition[793]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 16 20:20:28 localhost ignition[793]: op(1): [finished] loading QEMU firmware config module
Jan 16 20:20:38 localhost ignition[793]: Reading QEMU fw_cfg takes quadratic time. Consider moving large files or config fragments to a remote URL.
Jan 16 20:20:38 localhost ignition[793]: Reading config from QEMU fw_cfg: 36/325 KB
Jan 16 20:20:46 localhost ignition[793]: Reading config from QEMU fw_cfg: 48/325 KB
Jan 16 20:20:52 localhost ignition[793]: Reading config from QEMU fw_cfg: 56/325 KB
Jan 16 20:21:00 localhost ignition[793]: Reading config from QEMU fw_cfg: 64/325 KB
Jan 16 20:21:09 localhost ignition[793]: Reading config from QEMU fw_cfg: 72/325 KB
Jan 16 20:21:19 localhost ignition[793]: Reading config from QEMU fw_cfg: 80/325 KB
Jan 16 20:21:24 localhost ignition[793]: Reading config from QEMU fw_cfg: 84/325 KB
Jan 16 20:21:29 localhost ignition[793]: Reading config from QEMU fw_cfg: 88/325 KB
Jan 16 20:21:35 localhost ignition[793]: Reading config from QEMU fw_cfg: 92/325 KB
Jan 16 20:21:41 localhost ignition[793]: Reading config from QEMU fw_cfg: 96/325 KB
Jan 16 20:21:47 localhost ignition[793]: Reading config from QEMU fw_cfg: 100/325 KB
Jan 16 20:21:54 localhost ignition[793]: Reading config from QEMU fw_cfg: 104/325 KB
Jan 16 20:22:01 localhost ignition[793]: Reading config from QEMU fw_cfg: 108/325 KB
Jan 16 20:22:07 localhost ignition[793]: Reading config from QEMU fw_cfg: 112/325 KB
Jan 16 20:22:14 localhost ignition[793]: Reading config from QEMU fw_cfg: 116/325 KB
Jan 16 20:22:22 localhost ignition[793]: Reading config from QEMU fw_cfg: 120/325 KB
Jan 16 20:22:29 localhost ignition[793]: Reading config from QEMU fw_cfg: 124/325 KB
Jan 16 20:22:37 localhost ignition[793]: Reading config from QEMU fw_cfg: 128/325 KB
Jan 16 20:22:46 localhost ignition[793]: Reading config from QEMU fw_cfg: 132/325 KB
Jan 16 20:22:54 localhost ignition[793]: Reading config from QEMU fw_cfg: 136/325 KB
Jan 16 20:23:03 localhost ignition[793]: Reading config from QEMU fw_cfg: 140/325 KB
Jan 16 20:23:12 localhost ignition[793]: Reading config from QEMU fw_cfg: 144/325 KB
Jan 16 20:23:21 localhost ignition[793]: Reading config from QEMU fw_cfg: 148/325 KB
Jan 16 20:23:31 localhost ignition[793]: Reading config from QEMU fw_cfg: 152/325 KB
Jan 16 20:23:41 localhost ignition[793]: Reading config from QEMU fw_cfg: 156/325 KB
Jan 16 20:23:51 localhost ignition[793]: Reading config from QEMU fw_cfg: 160/325 KB
Jan 16 20:24:02 localhost ignition[793]: Reading config from QEMU fw_cfg: 164/325 KB
Jan 16 20:24:13 localhost ignition[793]: Reading config from QEMU fw_cfg: 168/325 KB
Jan 16 20:24:25 localhost ignition[793]: Reading config from QEMU fw_cfg: 172/325 KB
Jan 16 20:24:36 localhost ignition[793]: Reading config from QEMU fw_cfg: 176/325 KB
Jan 16 20:24:48 localhost ignition[793]: Reading config from QEMU fw_cfg: 180/325 KB
Jan 16 20:24:59 localhost ignition[793]: Reading config from QEMU fw_cfg: 184/325 KB
Jan 16 20:25:12 localhost ignition[793]: Reading config from QEMU fw_cfg: 188/325 KB
Jan 16 20:25:24 localhost ignition[793]: Reading config from QEMU fw_cfg: 192/325 KB
Jan 16 20:25:37 localhost ignition[793]: Reading config from QEMU fw_cfg: 196/325 KB
Jan 16 20:25:49 localhost ignition[793]: Reading config from QEMU fw_cfg: 200/325 KB
Jan 16 20:26:03 localhost ignition[793]: Reading config from QEMU fw_cfg: 204/325 KB
Jan 16 20:26:16 localhost ignition[793]: Reading config from QEMU fw_cfg: 208/325 KB
Jan 16 20:26:30 localhost ignition[793]: Reading config from QEMU fw_cfg: 212/325 KB
Jan 16 20:26:43 localhost ignition[793]: Reading config from QEMU fw_cfg: 216/325 KB
Jan 16 20:26:57 localhost ignition[793]: Reading config from QEMU fw_cfg: 220/325 KB
Jan 16 20:27:11 localhost ignition[793]: Reading config from QEMU fw_cfg: 224/325 KB
Jan 16 20:27:28 localhost ignition[793]: Reading config from QEMU fw_cfg: 228/325 KB
Jan 16 20:27:43 localhost ignition[793]: Reading config from QEMU fw_cfg: 232/325 KB
Jan 16 20:27:59 localhost ignition[793]: Reading config from QEMU fw_cfg: 236/325 KB
Jan 16 20:28:14 localhost ignition[793]: Reading config from QEMU fw_cfg: 240/325 KB
Jan 16 20:28:30 localhost ignition[793]: Reading config from QEMU fw_cfg: 244/325 KB
Jan 16 20:28:46 localhost ignition[793]: Reading config from QEMU fw_cfg: 248/325 KB
Jan 16 20:29:02 localhost ignition[793]: Reading config from QEMU fw_cfg: 252/325 KB
Jan 16 20:29:18 localhost ignition[793]: Reading config from QEMU fw_cfg: 256/325 KB
Jan 16 20:29:34 localhost ignition[793]: Reading config from QEMU fw_cfg: 260/325 KB
Jan 16 20:29:51 localhost ignition[793]: Reading config from QEMU fw_cfg: 264/325 KB
Jan 16 20:30:08 localhost ignition[793]: Reading config from QEMU fw_cfg: 268/325 KB
Jan 16 20:30:25 localhost ignition[793]: Reading config from QEMU fw_cfg: 272/325 KB
Jan 16 20:30:43 localhost ignition[793]: Reading config from QEMU fw_cfg: 276/325 KB
Jan 16 20:31:01 localhost ignition[793]: Reading config from QEMU fw_cfg: 280/325 KB
Jan 16 20:31:18 localhost ignition[793]: Reading config from QEMU fw_cfg: 284/325 KB
Jan 16 20:31:36 localhost ignition[793]: Reading config from QEMU fw_cfg: 288/325 KB
Jan 16 20:31:54 localhost ignition[793]: Reading config from QEMU fw_cfg: 292/325 KB
Jan 16 20:32:15 localhost ignition[793]: Reading config from QEMU fw_cfg: 296/325 KB
Jan 16 20:32:34 localhost ignition[793]: Reading config from QEMU fw_cfg: 300/325 KB
Jan 16 20:32:54 localhost ignition[793]: Reading config from QEMU fw_cfg: 304/325 KB
Jan 16 20:33:15 localhost ignition[793]: Reading config from QEMU fw_cfg: 308/325 KB
Jan 16 20:33:35 localhost ignition[793]: Reading config from QEMU fw_cfg: 312/325 KB
Jan 16 20:33:56 localhost ignition[793]: Reading config from QEMU fw_cfg: 316/325 KB
Jan 16 20:34:17 localhost ignition[793]: Reading config from QEMU fw_cfg: 320/325 KB
Jan 16 20:34:39 localhost ignition[793]: Reading config from QEMU fw_cfg: 324/325 KB
Jan 16 20:35:01 localhost ignition[793]: Reading config from QEMU fw_cfg: 325/325 KB
Jan 16 20:35:01 localhost ignition[793]: parsing config with SHA512: db67f6da8985302fb4bce143cb40d333493648fc3ec92099192c70b5311cf5a8431e35a004e2cc94939fc14cf2ec78ab890ac5cd5069ae3a5b70c3f8e6f87277
Jan 16 20:35:01 localhost ignition[793]: fetched base config from "system"
Jan 16 20:35:01 localhost ignition[793]: fetched user config from "qemu"
Jan 16 20:35:01 localhost ignition[793]: fetch-offline: fetch-offline passed
Jan 16 20:35:01 localhost systemd[1]: Finished Ignition (fetch-offline).
Jan 16 20:35:01 localhost ignition[793]: Ignition finished successfully
Jan 16 20:35:01 localhost systemd[1]: CoreOS Enable Network was skipped because no trigger condition checks were met.
Jan 16 20:35:01 localhost systemd[1]: Starting Copy CoreOS Firstboot Networking Config...
Jan 16 20:35:01 localhost systemd-journald[305]: Missed 88 kernel messages
Jan 16 20:35:01 localhost kernel: EXT4-fs (vda3): mounted filesystem with ordered data mode. Quota mode: none.
Jan 16 20:35:01 localhost coreos-copy-firstboot-network[802]: info: no files to copy from /mnt/boot_partition/coreos-firstboot-network; skipping
Jan 16 20:35:02 localhost systemd-journald[305]: Missed 1 kernel messages
Jan 16 20:35:02 localhost kernel: EXT4-fs (vda3): unmounting filesystem.
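The crawl above, roughly fifteen minutes (20:20:28 to 20:35:01) to read a 325 KB config in ~4 KB increments, is the quadratic fw_cfg behavior Ignition warned about, and its own suggestion is the fix: keep the fw_cfg payload tiny and fetch the real config over the network. A minimal sketch for a QEMU guest (the HTTP address and worker.ign filename are hypothetical):

    # Instead of passing the full 325 KB config through fw_cfg, pass a small
    # pointer config that replaces itself with a remote one:
    cat > pointer.ign <<'EOF'
    {
      "ignition": {
        "version": "3.2.0",
        "config": {
          "replace": { "source": "http://192.168.122.1:8080/worker.ign" }
        }
      }
    }
    EOF
    qemu-kvm ... -fw_cfg name=opt/com.coreos/config,file=pointer.ign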
Jan 16 20:35:02 localhost systemd[1]: Finished Copy CoreOS Firstboot Networking Config.
Jan 16 20:35:02 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 16 20:35:02 localhost systemd[1]: Reached target Network.
Jan 16 20:35:02 localhost systemd[1]: Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 16 20:35:02 localhost systemd[1]: Starting Ignition (kargs)...
Jan 16 20:35:02 localhost systemd[1]: Starting Ignition OSTree: Detect Partition Transposition...
Jan 16 20:35:02 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 16 20:35:02 localhost systemd[1]: Starting dracut initqueue hook...
Jan 16 20:35:02 localhost ignition[811]: Ignition 2.16.2
Jan 16 20:35:02 localhost systemd[1]: Starting RHCOS Check For Legacy LUKS Configuration...
Jan 16 20:35:02 localhost ignition[811]: Stage: kargs
Jan 16 20:35:02 localhost ignition[811]: reading system config file "/usr/lib/ignition/base.d/00-core.ign"
Jan 16 20:35:02 localhost systemd[1]: Finished Ignition (kargs).
Jan 16 20:35:02 localhost ignition[811]: parsing config with SHA512: ff6a5153be363997e4d5d3ea8cc4048373a457c48c4a5b134a08a30aacd167c1e0f099f0bdf1e24c99ad180628cd02b767b863b5fe3a8fce3fe1886847eb8e2e
Jan 16 20:35:02 localhost systemd[1]: Finished dracut initqueue hook.
Jan 16 20:35:02 localhost ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 16 20:35:02 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 16 20:35:02 localhost ignition[811]: kargs: kargs passed
Jan 16 20:35:02 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 16 20:35:02 localhost ignition[811]: Ignition finished successfully
Jan 16 20:35:02 localhost systemd[1]: Reached target Remote File Systems.
Jan 16 20:35:02 localhost systemd[1]: Acquire Live PXE rootfs Image was skipped because of an unmet condition check (ConditionPathExists=/run/ostree-live).
Jan 16 20:35:02 localhost systemd[1]: Persist Osmet Files (PXE) was skipped because of an unmet condition check (ConditionPathExists=/run/ostree-live).
Jan 16 20:35:02 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 16 20:35:02 localhost systemd[1]: Starting Check for FIPS mode...
Jan 16 20:35:02 localhost systemd[1]: Finished RHCOS Check For Legacy LUKS Configuration.
Jan 16 20:35:02 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 16 20:35:02 localhost systemd[1]: Finished Ignition OSTree: Detect Partition Transposition.
Jan 16 20:35:02 localhost systemd[1]: Ignition OSTree: Save Partitions was skipped because of an unmet condition check (ConditionPathIsDirectory=/run/ignition-ostree-transposefs).
Jan 16 20:35:03 localhost rhcos-fips[850]: Found /etc/ignition-machine-config-encapsulated.json in Ignition config
Jan 16 20:35:03 localhost rhcos-fips[850]: FIPS mode not requested
Jan 16 20:35:03 localhost systemd[1]: Finished Check for FIPS mode.
Jan 16 20:35:03 localhost systemd[1]: CoreOS Kernel Arguments Reboot was skipped because of an unmet condition check (ConditionPathExists=/run/coreos-kargs-reboot).
Jan 16 20:35:03 localhost systemd[1]: Starting Ignition (disks)...
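rhcos-fips decides above whether the host should boot with a FIPS crypto policy and reports "FIPS mode not requested". The actual checker script is not shown in this log; as a minimal stand-in, the kernel exposes the effective state in /proc/sys/crypto/fips_enabled, which can be read directly:

from pathlib import Path

def fips_enabled() -> bool:
    """Return True when the kernel reports FIPS mode (flag file contains 1)."""
    flag = Path("/proc/sys/crypto/fips_enabled")
    try:
        return flag.read_text().strip() == "1"
    except FileNotFoundError:
        # Kernels built without FIPS support expose no flag at all.
        return False

print("FIPS mode requested" if fips_enabled() else "FIPS mode not requested")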
Jan 16 20:35:03 localhost ignition[876]: Ignition 2.16.2
Jan 16 20:35:03 localhost ignition[876]: Stage: disks
Jan 16 20:35:03 localhost ignition[876]: reading system config file "/usr/lib/ignition/base.d/00-core.ign"
Jan 16 20:35:03 localhost ignition[876]: parsing config with SHA512: ff6a5153be363997e4d5d3ea8cc4048373a457c48c4a5b134a08a30aacd167c1e0f099f0bdf1e24c99ad180628cd02b767b863b5fe3a8fce3fe1886847eb8e2e
Jan 16 20:35:03 localhost ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 16 20:35:03 localhost ignition[876]: disks: disks passed
Jan 16 20:35:03 localhost ignition[876]: Ignition finished successfully
Jan 16 20:35:03 localhost systemd[1]: Finished Ignition (disks).
Jan 16 20:35:03 localhost systemd[1]: Reached target Initrd Root Device.
Jan 16 20:35:03 localhost systemd[1]: Starting CoreOS Ensure Unique Boot Filesystem...
Jan 16 20:35:03 localhost systemd-journald[305]: Missed 43 kernel messages
Jan 16 20:35:03 localhost kernel: vda: vda1 vda2 vda3 vda4
Jan 16 20:35:04 localhost systemd[1]: Finished CoreOS Ensure Unique Boot Filesystem.
Jan 16 20:35:04 localhost systemd[1]: Starting Ignition OSTree: Regenerate Filesystem UUID (root)...
Jan 16 20:35:04 localhost systemd[1]: Afterburn (Check In - from the initramfs) was skipped because of an unmet condition check (ConditionKernelCommandLine=ignition.platform.id=azure).
Jan 16 20:35:05 localhost ignition-ostree-firstboot-uuid[943]: Clearing log and setting UUID
Jan 16 20:35:05 localhost ignition-ostree-firstboot-uuid[943]: writing all SBs
Jan 16 20:35:05 localhost ignition-ostree-firstboot-uuid[943]: new UUID = 45961086-2073-4e1e-9449-be5affdc08c1
Jan 16 20:35:05 localhost systemd[1]: Finished Ignition OSTree: Regenerate Filesystem UUID (root).
Jan 16 20:35:05 localhost ignition-ostree-firstboot-uuid[940]: Regenerated UUID for /dev/disk/by-label/root
Jan 16 20:35:05 localhost systemd[1]: Starting Ignition OSTree: Grow Root Filesystem...
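The "Clearing log and setting UUID" / "writing all SBs" messages are xfsprogs output produced while a fresh UUID is stamped onto every superblock of the root filesystem. Assuming the firstboot-uuid unit effectively wraps xfs_admin (the log does not show the exact command it runs), an equivalent invocation would be:

import subprocess

# The log resolves the root filesystem via this by-label path.
DEVICE = "/dev/disk/by-label/root"

# Assumption: "-U generate" asks xfs_admin to pick a random new UUID and
# rewrite all superblocks, matching the "writing all SBs" message above.
subprocess.run(["xfs_admin", "-U", "generate", DEVICE], check=True)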
Jan 16 20:35:06 localhost systemd-journald[305]: Missed 9 kernel messages
Jan 16 20:35:06 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 16 20:35:06 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Jan 16 20:35:06 localhost kernel: XFS (vda4): Ending clean mount
Jan 16 20:35:07 localhost ignition-ostree-growfs[972]: CHANGED: partition=4 start=1050624 old: size=6313984 end=7364607 new: size=66058207 end=67108830
Jan 16 20:35:08 localhost ignition-ostree-growfs[1099]: meta-data=/dev/vda4 isize=512 agcount=4, agsize=197312 blks
Jan 16 20:35:08 localhost ignition-ostree-growfs[1099]: = sectsz=512 attr=2, projid32bit=1
Jan 16 20:35:08 localhost ignition-ostree-growfs[1099]: = crc=1 finobt=1, sparse=1, rmapbt=0
Jan 16 20:35:08 localhost ignition-ostree-growfs[1099]: = reflink=1 bigtime=1 inobtcount=1
Jan 16 20:35:08 localhost ignition-ostree-growfs[1099]: data = bsize=4096 blocks=789248, imaxpct=25
Jan 16 20:35:08 localhost ignition-ostree-growfs[1099]: = sunit=0 swidth=0 blks
Jan 16 20:35:08 localhost ignition-ostree-growfs[1099]: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
Jan 16 20:35:08 localhost ignition-ostree-growfs[1099]: log =internal log bsize=4096 blocks=16384, version=2
Jan 16 20:35:08 localhost ignition-ostree-growfs[1099]: = sectsz=512 sunit=0 blks, lazy-count=1
Jan 16 20:35:08 localhost ignition-ostree-growfs[1099]: realtime =none extsz=4096 blocks=0, rtextents=0
Jan 16 20:35:08 localhost ignition-ostree-growfs[1099]: data blocks changed from 789248 to 8257275
Jan 16 20:35:08 localhost systemd-journald[305]: Missed 2 kernel messages
Jan 16 20:35:08 localhost kernel: XFS (vda4): Unmounting Filesystem
Jan 16 20:35:08 localhost systemd[1]: Finished Ignition OSTree: Grow Root Filesystem.
Jan 16 20:35:08 localhost systemd[1]: Starting Ignition OSTree: Autosave XFS Rootfs Partition...
Jan 16 20:35:08 localhost ignition-ostree-transposefs[1104]: autosave-xfs: /dev/disk/by-label/root agcount=42 is lower than threshold=400
Jan 16 20:35:08 localhost systemd[1]: Finished Ignition OSTree: Autosave XFS Rootfs Partition.
Jan 16 20:35:08 localhost systemd[1]: Ignition OSTree: Restore Partitions was skipped because of an unmet condition check (ConditionPathIsDirectory=/run/ignition-ostree-transposefs).
Jan 16 20:35:08 localhost systemd[1]: Starting Determine root FS mount option flags...
Jan 16 20:35:08 localhost systemd[1]: Finished Determine root FS mount option flags.
Jan 16 20:35:08 localhost systemd[1]: Mounting /sysroot...
Jan 16 20:35:08 localhost systemd-journald[305]: Missed 18 kernel messages
Jan 16 20:35:08 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Jan 16 20:35:09 localhost kernel: XFS (vda4): Ending clean mount
Jan 16 20:35:09 localhost kernel: XFS (vda4): Quotacheck needed: Please wait.
Jan 16 20:35:09 localhost kernel: XFS (vda4): Quotacheck: Done.
Jan 16 20:35:10 localhost systemd[1]: Mounted /sysroot.
Jan 16 20:35:10 localhost systemd[1]: Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 16 20:35:10 localhost systemd[1]: Starting OSTree Prepare OS/...
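The CHANGED line (512-byte sectors) and the xfs_growfs report (4096-byte blocks) are mutually consistent: the old 6313984-sector partition is exactly 789248 blocks (about 3.0 GiB), and the new 66058207-sector partition holds 8257275 blocks (about 31.5 GiB), with 7 sectors left over because the new sector count is not a multiple of 8. The arithmetic, for reference:

SECTOR = 512   # partition sizes in the CHANGED line are in 512-byte sectors
BLOCK = 4096   # bsize reported by xfs_growfs

old_part, old_fs = 6_313_984 * SECTOR, 789_248 * BLOCK
new_part, new_fs = 66_058_207 * SECTOR, 8_257_275 * BLOCK

print(old_part == old_fs)                    # True: 3,232,759,808 bytes each
print(new_part / 2**30, new_fs / 2**30)      # both ~31.5 GiB
print(new_part - new_fs)                     # 3584 bytes = 7 leftover sectors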
Jan 16 20:35:10 localhost ostree-prepare-root[1136]: preparing sysroot at /sysroot
Jan 16 20:35:10 localhost ostree-prepare-root[1136]: Resolved OSTree target to: /sysroot/ostree/deploy/rhcos/deploy/672dcb8e1365f3248f184ad8efb8427f00bf75530b8df8375ee9778c1b2554ff.0
Jan 16 20:35:10 localhost ostree-prepare-root[1136]: filesystem at /sysroot currently writable: 1
Jan 16 20:35:10 localhost ostree-prepare-root[1136]: sysroot.readonly configuration value: 1
Jan 16 20:35:10 localhost systemd[1]: Finished OSTree Prepare OS/.
Jan 16 20:35:10 localhost systemd[1]: Reached target Initrd Root File System.
Jan 16 20:35:10 localhost systemd[1]: Afterburn Hostname was skipped because no trigger condition checks were met.
Jan 16 20:35:10 localhost systemd[1]: Starting Ignition OSTree: Check Root Filesystem Size...
Jan 16 20:35:10 localhost systemd[1]: Starting Mount OSTree /var...
Jan 16 20:35:10 localhost systemd[1]: Finished Ignition OSTree: Check Root Filesystem Size.
Jan 16 20:35:10 localhost ignition-ostree-mount-var[1142]: Mounting /sysroot/sysroot/ostree/deploy/rhcos/var
Jan 16 20:35:10 localhost systemd[1]: Finished Mount OSTree /var.
Jan 16 20:35:10 localhost systemd[1]: Starting Ignition (mount)...
Jan 16 20:35:10 localhost ignition[1150]: Ignition 2.16.2
Jan 16 20:35:10 localhost ignition[1150]: Stage: mount
Jan 16 20:35:10 localhost ignition[1150]: reading system config file "/usr/lib/ignition/base.d/00-core.ign"
Jan 16 20:35:10 localhost ignition[1150]: parsing config with SHA512: ff6a5153be363997e4d5d3ea8cc4048373a457c48c4a5b134a08a30aacd167c1e0f099f0bdf1e24c99ad180628cd02b767b863b5fe3a8fce3fe1886847eb8e2e
Jan 16 20:35:10 localhost ignition[1150]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 16 20:35:10 localhost systemd[1]: Finished Ignition (mount).
Jan 16 20:35:10 localhost ignition[1150]: mount: mount passed
Jan 16 20:35:10 localhost ignition[1150]: Ignition finished successfully
Jan 16 20:35:10 localhost systemd[1]: Clear SSSD NSS Cache in Persistent /var was skipped because of an unmet condition check (ConditionPathExists=/sysroot/var/lib/sss/mc).
Jan 16 20:35:10 localhost systemd[1]: Starting Populate OSTree /var...
Jan 16 20:35:10 localhost systemd-tmpfiles[1158]: Failed to parse ACL "default:group:tss:rwx": No such file or directory. Ignoring
Jan 16 20:35:10 localhost systemd-tmpfiles[1158]: Failed to parse ACL "default:group:tss:rwx": No such file or directory. Ignoring
Jan 16 20:35:11 localhost ignition-ostree-populate-var[1160]: Relabeled /sysroot//var/home from to system_u:object_r:home_root_t:s0
Jan 16 20:35:11 localhost systemd-tmpfiles[1161]: Failed to parse ACL "default:group:tss:rwx": No such file or directory. Ignoring
Jan 16 20:35:11 localhost systemd-tmpfiles[1161]: Failed to parse ACL "default:group:tss:rwx": No such file or directory. Ignoring
Jan 16 20:35:12 localhost ignition-ostree-populate-var[1163]: Relabeled /sysroot//var/roothome from to system_u:object_r:admin_home_t:s0
Jan 16 20:35:12 localhost ignition-ostree-populate-var[1163]: Relabeled /sysroot//var/roothome/.bashrc from to system_u:object_r:admin_home_t:s0
Jan 16 20:35:12 localhost ignition-ostree-populate-var[1163]: Relabeled /sysroot//var/roothome/.bash_profile from to system_u:object_r:admin_home_t:s0
Jan 16 20:35:12 localhost ignition-ostree-populate-var[1163]: Relabeled /sysroot//var/roothome/.bash_logout from to system_u:object_r:admin_home_t:s0
Jan 16 20:35:12 localhost systemd-tmpfiles[1164]: Failed to parse ACL "default:group:tss:rwx": No such file or directory. Ignoring
Jan 16 20:35:12 localhost systemd-tmpfiles[1164]: Failed to parse ACL "default:group:tss:rwx": No such file or directory. Ignoring
Jan 16 20:35:13 localhost ignition-ostree-populate-var[1166]: Relabeled /sysroot//var/opt from to system_u:object_r:var_t:s0
Jan 16 20:35:13 localhost systemd-tmpfiles[1167]: Failed to parse ACL "default:group:tss:rwx": No such file or directory. Ignoring
Jan 16 20:35:13 localhost systemd-tmpfiles[1167]: Failed to parse ACL "default:group:tss:rwx": No such file or directory. Ignoring
Jan 16 20:35:13 localhost ignition-ostree-populate-var[1169]: Relabeled /sysroot//var/srv from to system_u:object_r:var_t:s0
Jan 16 20:35:13 localhost systemd-tmpfiles[1170]: Failed to parse ACL "default:group:tss:rwx": No such file or directory. Ignoring
Jan 16 20:35:13 localhost systemd-tmpfiles[1170]: Failed to parse ACL "default:group:tss:rwx": No such file or directory. Ignoring
Jan 16 20:35:14 localhost ignition-ostree-populate-var[1172]: Relabeled /sysroot//var/usrlocal from to system_u:object_r:usr_t:s0
Jan 16 20:35:14 localhost ignition-ostree-populate-var[1172]: Relabeled /sysroot//var/usrlocal/bin from to system_u:object_r:bin_t:s0
Jan 16 20:35:14 localhost ignition-ostree-populate-var[1172]: Relabeled /sysroot//var/usrlocal/etc from to system_u:object_r:usr_t:s0
Jan 16 20:35:14 localhost ignition-ostree-populate-var[1172]: Relabeled /sysroot//var/usrlocal/games from to system_u:object_r:usr_t:s0
Jan 16 20:35:14 localhost ignition-ostree-populate-var[1172]: Relabeled /sysroot//var/usrlocal/include from to system_u:object_r:usr_t:s0
Jan 16 20:35:14 localhost ignition-ostree-populate-var[1172]: Relabeled /sysroot//var/usrlocal/lib from to system_u:object_r:lib_t:s0
Jan 16 20:35:14 localhost ignition-ostree-populate-var[1172]: Relabeled /sysroot//var/usrlocal/man from to system_u:object_r:usr_t:s0
Jan 16 20:35:14 localhost ignition-ostree-populate-var[1172]: Relabeled /sysroot//var/usrlocal/sbin from to system_u:object_r:bin_t:s0
Jan 16 20:35:14 localhost ignition-ostree-populate-var[1172]: Relabeled /sysroot//var/usrlocal/share from to system_u:object_r:usr_t:s0
Jan 16 20:35:14 localhost ignition-ostree-populate-var[1172]: Relabeled /sysroot//var/usrlocal/src from to system_u:object_r:usr_t:s0
Jan 16 20:35:14 localhost systemd-tmpfiles[1173]: Failed to parse ACL "default:group:tss:rwx": No such file or directory. Ignoring
Jan 16 20:35:14 localhost systemd-tmpfiles[1173]: Failed to parse ACL "default:group:tss:rwx": No such file or directory. Ignoring
Jan 16 20:35:14 localhost ignition-ostree-populate-var[1175]: Relabeled /sysroot//var/mnt from to system_u:object_r:mnt_t:s0
Jan 16 20:35:14 localhost systemd-tmpfiles[1176]: Failed to parse ACL "default:group:tss:rwx": No such file or directory. Ignoring
Jan 16 20:35:14 localhost systemd-tmpfiles[1176]: Failed to parse ACL "default:group:tss:rwx": No such file or directory. Ignoring
Jan 16 20:35:15 localhost systemd[1]: Finished Populate OSTree /var.
Jan 16 20:35:15 localhost systemd[1]: Starting Ignition (files)...
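The Ignition files stage that follows creates the "core" user with the useradd arguments logged below and then writes the bootstrap payload into /sysroot. The config itself is not reproduced in the log; purely as an illustration, a spec-3-style fragment that would drive those operations might look like this (the spec version, SSH key, and motd text are placeholders, not values recovered from this system):

import json

# Illustrative reconstruction only. The user fields mirror the logged
# useradd invocation; "gecos" is what maps to useradd --comment.
config = {
    "ignition": {"version": "3.2.0"},  # assumed spec version
    "passwd": {
        "users": [{
            "name": "core",
            "gecos": "CoreOS Admin",
            "groups": ["adm", "sudo", "systemd-journal", "wheel"],
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"],
        }]
    },
    "storage": {
        "files": [{
            # One of the paths appended to below; real contents are not logged.
            "path": "/etc/motd",
            "append": [{"source": "data:,placeholder%20motd%20text"}],
        }]
    },
}

print(json.dumps(config, indent=2))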
Jan 16 20:35:15 localhost ignition[1179]: Ignition 2.16.2
Jan 16 20:35:15 localhost ignition[1179]: Stage: files
Jan 16 20:35:15 localhost ignition[1179]: reading system config file "/usr/lib/ignition/base.d/00-core.ign"
Jan 16 20:35:15 localhost ignition[1179]: parsing config with SHA512: ff6a5153be363997e4d5d3ea8cc4048373a457c48c4a5b134a08a30aacd167c1e0f099f0bdf1e24c99ad180628cd02b767b863b5fe3a8fce3fe1886847eb8e2e
Jan 16 20:35:15 localhost ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 16 20:35:15 localhost ignition[1179]: files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 16 20:35:15 localhost ignition[1179]: files: ensureUsers: op(1): executing: "useradd" "--root" "/sysroot" "--create-home" "--password" "*" "--comment" "CoreOS Admin" "--groups" "adm,sudo,systemd-journal,wheel" "core"
Jan 16 20:35:15 localhost ignition[1179]: files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 16 20:35:15 localhost ignition[1179]: files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 16 20:35:15 localhost ignition[1179]: wrote ssh authorized keys file for user: core
Jan 16 20:35:15 localhost ignition[1179]: files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/ironic-network.env"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/ironic-network.env"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/ignition-machine-config-encapsulated.json"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/ignition-machine-config-encapsulated.json"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(5): [started] appending to file "/sysroot/etc/motd"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(5): [finished] appending to file "/sysroot/etc/motd"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/ironic.env"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/ironic.env"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/containers/registries.conf"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/containers/registries.conf"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/profile.d/proxy.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/profile.d/proxy.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/var/usrlocal/bin/release-image-download.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/var/usrlocal/bin/release-image-download.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/NetworkManager/system-connections/nmconnection"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/NetworkManager/system-connections/nmconnection"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/var/usrlocal/bin/bootkube.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/var/usrlocal/bin/bootkube.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/var/usrlocal/bin/bootstrap-cluster-gather.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/var/usrlocal/bin/bootstrap-cluster-gather.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/var/usrlocal/bin/bootstrap-pivot.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/var/usrlocal/bin/bootstrap-pivot.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/var/usrlocal/bin/bootstrap-service-record.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/var/usrlocal/bin/bootstrap-service-record.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/var/usrlocal/bin/bootstrap-verify-api-server-urls.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/var/usrlocal/bin/bootstrap-verify-api-server-urls.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/var/usrlocal/bin/crio-configure.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/var/usrlocal/bin/crio-configure.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/var/usrlocal/bin/installer-gather.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/var/usrlocal/bin/installer-gather.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/var/usrlocal/bin/installer-masters-gather.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/var/usrlocal/bin/installer-masters-gather.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/var/usrlocal/bin/kubelet-pause-image.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/var/usrlocal/bin/kubelet-pause-image.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/var/usrlocal/bin/kubelet.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/var/usrlocal/bin/kubelet.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(15): [started] writing file "/sysroot/var/usrlocal/bin/build-ironic-env.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(15): [finished] writing file "/sysroot/var/usrlocal/bin/build-ironic-env.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(16): [started] writing file "/sysroot/var/usrlocal/bin/release-image.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(16): [finished] writing file "/sysroot/var/usrlocal/bin/release-image.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/var/usrlocal/bin/report-progress.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/var/usrlocal/bin/report-progress.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/NetworkManager/conf.d/99-baremetal.conf"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/NetworkManager/conf.d/99-baremetal.conf"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(19): [started] writing file "/sysroot/etc/NetworkManager/dispatcher.d/30-local-dns-prepender"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(19): [finished] writing file "/sysroot/etc/NetworkManager/dispatcher.d/30-local-dns-prepender"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(1a): [started] writing file "/sysroot/var/usrlocal/bin/approve-csr.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(1a): [finished] writing file "/sysroot/var/usrlocal/bin/approve-csr.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(1b): [started] writing file "/sysroot/etc/containers/systemd/image-customization.container"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(1b): [finished] writing file "/sysroot/etc/containers/systemd/image-customization.container"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(1c): [started] writing file "/sysroot/etc/containers/systemd/ironic-dnsmasq.container"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(1c): [finished] writing file "/sysroot/etc/containers/systemd/ironic-dnsmasq.container"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(1d): [started] writing file "/sysroot/etc/containers/systemd/ironic-httpd.container"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(1d): [finished] writing file "/sysroot/etc/containers/systemd/ironic-httpd.container"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(1e): [started] writing file "/sysroot/etc/containers/systemd/ironic-inspector.container"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(1e): [finished] writing file "/sysroot/etc/containers/systemd/ironic-inspector.container"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(1f): [started] writing file "/sysroot/etc/containers/systemd/ironic-ramdisk-logs.container"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(1f): [finished] writing file "/sysroot/etc/containers/systemd/ironic-ramdisk-logs.container"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(20): [started] writing file "/sysroot/etc/containers/systemd/ironic.container"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(20): [finished] writing file "/sysroot/etc/containers/systemd/ironic.container"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(21): [started] writing file "/sysroot/etc/containers/systemd/ironic.volume"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(21): [finished] writing file "/sysroot/etc/containers/systemd/ironic.volume"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(22): [started] writing file "/sysroot/var/roothome/.docker/config.json"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(22): [finished] writing file "/sysroot/var/roothome/.docker/config.json"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(23): [started] writing file "/sysroot/etc/systemd/system.conf.d/10-default-env.conf"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(23): [finished] writing file "/sysroot/etc/systemd/system.conf.d/10-default-env.conf"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(24): [started] writing file "/sysroot/var/opt/openshift/original_cvo_overrides.patch"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(24): [finished] writing file "/sysroot/var/opt/openshift/original_cvo_overrides.patch"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(25): [started] writing file "/sysroot/var/usrlocal/bin/start-provisioning-nic.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(25): [finished] writing file "/sysroot/var/usrlocal/bin/start-provisioning-nic.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(26): [started] writing file "/sysroot/var/usrlocal/bin/setup-image-data.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(26): [finished] writing file "/sysroot/var/usrlocal/bin/setup-image-data.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(27): [started] writing file "/sysroot/var/usrlocal/bin/prov-iptables.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(27): [finished] writing file "/sysroot/var/usrlocal/bin/prov-iptables.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(28): [started] writing file "/sysroot/var/usrlocal/bin/master-bmh-update.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(28): [finished] writing file "/sysroot/var/usrlocal/bin/master-bmh-update.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(29): [started] writing file "/sysroot/var/usrlocal/bin/dhcp-filter.sh"
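The run of started/finished pairs continues below through op(88); the op identifiers are hexadecimal, which is why op(f) is followed by op(10). One way to tally a dump like this and confirm that every file write completed (a throwaway analysis sketch, not part of the boot process itself):

import re
from collections import Counter

# Matches the files-stage entries shown here, e.g.
#   ... createFiles: op(1c): [started] writing file "/sysroot/..."
ENTRY = re.compile(
    r'op\(([0-9a-f]+)\): \[(started|finished)\] (writing|appending to) file "([^"]+)"'
)

def finished_paths(journal_text: str) -> Counter:
    """Count [finished] events per path; each should appear exactly once."""
    events = ENTRY.findall(journal_text)
    return Counter(path for _op, state, _verb, path in events if state == "finished")

# Usage: finished_paths(open("boot.log").read()).most_common() lists any
# path written more than once, and omits writes that never finished.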
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(29): [finished] writing file "/sysroot/var/usrlocal/bin/dhcp-filter.sh"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(2a): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-localhost-server.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(2a): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-localhost-server.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(2b): [started] writing file "/sysroot/var/opt/openshift/tls/aggregator-client.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(2b): [finished] writing file "/sysroot/var/opt/openshift/tls/aggregator-client.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(2c): [started] writing file "/sysroot/var/opt/openshift/manifests/openshift-install.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(2c): [finished] writing file "/sysroot/var/opt/openshift/manifests/openshift-install.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(2d): [started] writing file "/sysroot/var/opt/openshift/tls/root-ca.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(2d): [finished] writing file "/sysroot/var/opt/openshift/tls/root-ca.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(2e): [started] writing file "/sysroot/var/opt/openshift/tls/journal-gatewayd.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(2e): [finished] writing file "/sysroot/var/opt/openshift/tls/journal-gatewayd.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(2f): [started] writing file "/sysroot/var/opt/openshift/tls/journal-gatewayd.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(2f): [finished] writing file "/sysroot/var/opt/openshift/tls/journal-gatewayd.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(30): [started] writing file "/sysroot/var/opt/openshift/tls/service-account.pub"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(30): [finished] writing file "/sysroot/var/opt/openshift/tls/service-account.pub"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(31): [started] writing file "/sysroot/var/opt/openshift/tls/service-account.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(31): [finished] writing file "/sysroot/var/opt/openshift/tls/service-account.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(32): [started] writing file "/sysroot/var/opt/openshift/manifests/cluster-config.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(32): [finished] writing file "/sysroot/var/opt/openshift/manifests/cluster-config.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(33): [started] writing file "/sysroot/var/opt/openshift/manifests/cluster-dns-02-config.yml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(33): [finished] writing file "/sysroot/var/opt/openshift/manifests/cluster-dns-02-config.yml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(34): [started] writing file "/sysroot/var/opt/openshift/manifests/cluster-infrastructure-02-config.yml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(34): [finished] writing file "/sysroot/var/opt/openshift/manifests/cluster-infrastructure-02-config.yml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(35): [started] writing file "/sysroot/var/opt/openshift/manifests/cluster-ingress-02-config.yml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(35): [finished] writing file "/sysroot/var/opt/openshift/manifests/cluster-ingress-02-config.yml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(36): [started] writing file "/sysroot/var/opt/openshift/manifests/cluster-network-01-crd.yml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(36): [finished] writing file "/sysroot/var/opt/openshift/manifests/cluster-network-01-crd.yml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(37): [started] writing file "/sysroot/var/opt/openshift/manifests/cluster-network-02-config.yml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(37): [finished] writing file "/sysroot/var/opt/openshift/manifests/cluster-network-02-config.yml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(38): [started] writing file "/sysroot/var/opt/openshift/manifests/cluster-proxy-01-config.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(38): [finished] writing file "/sysroot/var/opt/openshift/manifests/cluster-proxy-01-config.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(39): [started] writing file "/sysroot/var/opt/openshift/manifests/cluster-scheduler-02-config.yml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(39): [finished] writing file "/sysroot/var/opt/openshift/manifests/cluster-scheduler-02-config.yml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(3a): [started] writing file "/sysroot/var/opt/openshift/manifests/cvo-overrides.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(3a): [finished] writing file "/sysroot/var/opt/openshift/manifests/cvo-overrides.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(3b): [started] writing file "/sysroot/var/opt/openshift/manifests/kube-cloud-config.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(3b): [finished] writing file "/sysroot/var/opt/openshift/manifests/kube-cloud-config.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(3c): [started] writing file "/sysroot/var/opt/openshift/manifests/kube-system-configmap-root-ca.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(3c): [finished] writing file "/sysroot/var/opt/openshift/manifests/kube-system-configmap-root-ca.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(3d): [started] writing file "/sysroot/var/opt/openshift/manifests/machine-config-server-tls-secret.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(3d): [finished] writing file "/sysroot/var/opt/openshift/manifests/machine-config-server-tls-secret.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(3e): [started] writing file "/sysroot/var/opt/openshift/manifests/openshift-config-secret-pull-secret.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(3e): [finished] writing file "/sysroot/var/opt/openshift/manifests/openshift-config-secret-pull-secret.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(3f): [started] writing file "/sysroot/var/opt/openshift/openshift/99_baremetal-provisioning-config.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(3f): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_baremetal-provisioning-config.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(40): [started] writing file "/sysroot/var/opt/openshift/openshift/99_feature-gate.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(40): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_feature-gate.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(41): [started] writing file "/sysroot/var/opt/openshift/openshift/99_kubeadmin-password-secret.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(41): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_kubeadmin-password-secret.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(42): [started] writing file "/sysroot/var/opt/openshift/openshift/openshift-install-manifests.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(42): [finished] writing file "/sysroot/var/opt/openshift/openshift/openshift-install-manifests.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(43): [started] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_master-user-data-secret.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(43): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_master-user-data-secret.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(44): [started] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-machineconfig_99-master-ssh.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(44): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-machineconfig_99-master-ssh.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(45): [started] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_host-bmc-secrets-0.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(45): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_host-bmc-secrets-0.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(46): [started] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_host-bmc-secrets-1.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(46): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_host-bmc-secrets-1.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(47): [started] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_host-bmc-secrets-2.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(47): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_host-bmc-secrets-2.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(48): [started] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_host-bmc-secrets-3.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(48): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_host-bmc-secrets-3.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(49): [started] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_host-bmc-secrets-4.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(49): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_host-bmc-secrets-4.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(4a): [started] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_hosts-0.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(4a): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_hosts-0.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(4b): [started] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_hosts-1.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(4b): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_hosts-1.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(4c): [started] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_hosts-2.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(4c): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_hosts-2.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(4d): [started] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_hosts-3.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(4d): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_hosts-3.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(4e): [started] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_hosts-4.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(4e): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_hosts-4.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(4f): [started] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_master-machines-0.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(4f): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_master-machines-0.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(50): [started] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_master-machines-1.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(50): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_master-machines-1.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(51): [started] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_master-machines-2.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(51): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_master-machines-2.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(52): [started] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_worker-user-data-secret.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(52): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_worker-user-data-secret.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(53): [started] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-machineconfig_99-worker-ssh.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(53): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-machineconfig_99-worker-ssh.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(54): [started] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_worker-machineset-0.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(54): [finished] writing file "/sysroot/var/opt/openshift/openshift/99_openshift-cluster-api_worker-machineset-0.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(55): [started] writing file "/sysroot/var/opt/metal3/auth/clouds.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(55): [finished] writing file "/sysroot/var/opt/metal3/auth/clouds.yaml"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(56): [started] writing file "/sysroot/var/opt/openshift/auth/kubeconfig"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(56): [finished] writing file "/sysroot/var/opt/openshift/auth/kubeconfig"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(57): [started] writing file "/sysroot/var/opt/openshift/auth/kubeconfig-kubelet"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(57): [finished] writing file "/sysroot/var/opt/openshift/auth/kubeconfig-kubelet"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(58): [started] writing file "/sysroot/var/opt/openshift/auth/kubeconfig-loopback"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(58): [finished] writing file "/sysroot/var/opt/openshift/auth/kubeconfig-loopback"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(59): [started] writing file "/sysroot/var/opt/openshift/tls/admin-kubeconfig-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(59): [finished] writing file "/sysroot/var/opt/openshift/tls/admin-kubeconfig-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(5a): [started] writing file "/sysroot/var/opt/openshift/tls/aggregator-ca.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(5a): [finished] writing file "/sysroot/var/opt/openshift/tls/aggregator-ca.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(5b): [started] writing file "/sysroot/var/opt/openshift/tls/aggregator-ca.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(5b): [finished] writing file "/sysroot/var/opt/openshift/tls/aggregator-ca.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(5c): [started] writing file "/sysroot/var/opt/openshift/tls/aggregator-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(5c): [finished] writing file "/sysroot/var/opt/openshift/tls/aggregator-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(5d): [started] writing file "/sysroot/var/opt/openshift/tls/machine-config-server.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(5d): [finished] writing file "/sysroot/var/opt/openshift/tls/machine-config-server.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(5e): [started] writing file "/sysroot/var/opt/openshift/tls/aggregator-client.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(5e): [finished] writing file "/sysroot/var/opt/openshift/tls/aggregator-client.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(5f): [started] writing file "/sysroot/var/opt/openshift/tls/aggregator-signer.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(5f): [finished] writing file "/sysroot/var/opt/openshift/tls/aggregator-signer.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(60): [started] writing file "/sysroot/var/opt/openshift/tls/aggregator-signer.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(60): [finished] writing file "/sysroot/var/opt/openshift/tls/aggregator-signer.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(61): [started] writing file "/sysroot/var/opt/openshift/tls/apiserver-proxy.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(61): [finished] writing file "/sysroot/var/opt/openshift/tls/apiserver-proxy.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(62): [started] writing file "/sysroot/var/opt/openshift/tls/apiserver-proxy.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(62): [finished] writing file "/sysroot/var/opt/openshift/tls/apiserver-proxy.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(63): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-lb-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(63): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-lb-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(64): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-lb-server.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(64): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-lb-server.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(65): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-lb-server.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(65): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-lb-server.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(66): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-internal-lb-server.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(66): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-internal-lb-server.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(67): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-internal-lb-server.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(67): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-internal-lb-server.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(68): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-lb-signer.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(68): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-lb-signer.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(69): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-lb-signer.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(69): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-lb-signer.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(6a): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-localhost-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(6a): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-localhost-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(6b): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-localhost-server.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(6b): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-localhost-server.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(6c): [started] writing file "/sysroot/var/opt/openshift/tls/machine-config-server.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(6c): [finished] writing file "/sysroot/var/opt/openshift/tls/machine-config-server.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(6d): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-localhost-signer.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(6d): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-localhost-signer.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(6e): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-localhost-signer.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(6e): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-localhost-signer.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(6f): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-service-network-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(6f): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-service-network-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(70): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-service-network-server.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(70): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-service-network-server.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(71): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-service-network-server.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(71): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-service-network-server.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(72): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-service-network-signer.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(72): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-service-network-signer.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(73): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-service-network-signer.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(73): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-service-network-signer.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(74): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-complete-server-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(74): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-complete-server-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(75): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-complete-client-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(75): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-complete-client-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(76): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-to-kubelet-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(76): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-to-kubelet-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(77): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-to-kubelet-client.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(77): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-to-kubelet-client.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(78): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-to-kubelet-client.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(78): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-to-kubelet-client.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(79): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-to-kubelet-signer.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(79): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-to-kubelet-signer.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(7a): [started] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-to-kubelet-signer.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(7a): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-apiserver-to-kubelet-signer.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(7b): [started] writing file "/sysroot/var/opt/openshift/tls/kube-control-plane-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(7b): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-control-plane-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(7c): [started] writing file "/sysroot/var/opt/openshift/tls/kube-control-plane-kube-controller-manager-client.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(7c): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-control-plane-kube-controller-manager-client.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(7d): [started] writing file "/sysroot/var/opt/openshift/tls/kube-control-plane-kube-controller-manager-client.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(7d): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-control-plane-kube-controller-manager-client.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(7e): [started] writing file "/sysroot/var/opt/openshift/tls/kube-control-plane-kube-scheduler-client.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(7e): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-control-plane-kube-scheduler-client.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(7f): [started] writing file "/sysroot/var/opt/openshift/tls/kube-control-plane-kube-scheduler-client.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(7f): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-control-plane-kube-scheduler-client.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(80): [started] writing file "/sysroot/var/opt/openshift/tls/kube-control-plane-signer.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(80): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-control-plane-signer.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(81): [started] writing file "/sysroot/var/opt/openshift/tls/kube-control-plane-signer.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(81): [finished] writing file "/sysroot/var/opt/openshift/tls/kube-control-plane-signer.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(82): [started] writing file "/sysroot/var/opt/openshift/tls/kubelet-bootstrap-kubeconfig-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(82): [finished] writing file "/sysroot/var/opt/openshift/tls/kubelet-bootstrap-kubeconfig-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(83): [started] writing file "/sysroot/var/opt/openshift/tls/kubelet-client-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(83): [finished] writing file "/sysroot/var/opt/openshift/tls/kubelet-client-ca-bundle.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(84): [started] writing file "/sysroot/var/opt/openshift/tls/kubelet-client.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(84): [finished] writing file "/sysroot/var/opt/openshift/tls/kubelet-client.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(85): [started] writing file "/sysroot/var/opt/openshift/tls/kubelet-client.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(85): [finished] writing file "/sysroot/var/opt/openshift/tls/kubelet-client.crt"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(86): [started] writing file "/sysroot/var/opt/openshift/tls/kubelet-signer.key"
Jan 16 20:35:15 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(86): [finished] writing file "/sysroot/var/opt/openshift/tls/kubelet-signer.key"
Jan 16 20:35:24 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(87): [started] writing file "/sysroot/var/opt/openshift/tls/kubelet-signer.crt"
Jan 16 20:35:24 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(87): [finished] writing file "/sysroot/var/opt/openshift/tls/kubelet-signer.crt"
Jan 16 20:35:24 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(88): [started] writing file "/sysroot/var/opt/openshift/tls/kubelet-serving-ca-bundle.crt"
Jan 16 20:35:24 localhost ignition[1179]: files: createFilesystemsFiles:
createFiles: op(88): [finished] writing file "/sysroot/var/opt/openshift/tls/kubelet-serving-ca-bundle.crt" Jan 16 20:35:24 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(89): [started] writing file "/sysroot/etc/pki/ca-trust/source/anchors/ca.crt" Jan 16 20:35:25 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(89): [finished] writing file "/sysroot/etc/pki/ca-trust/source/anchors/ca.crt" Jan 16 20:35:25 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(8a): [started] writing file "/sysroot/var/opt/metal3/auth/ironic-rpc/auth-config" Jan 16 20:35:25 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(8a): [finished] writing file "/sysroot/var/opt/metal3/auth/ironic-rpc/auth-config" Jan 16 20:35:25 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(8b): [started] writing file "/sysroot/var/opt/metal3/auth/ironic/auth-config" Jan 16 20:35:25 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(8b): [finished] writing file "/sysroot/var/opt/metal3/auth/ironic/auth-config" Jan 16 20:35:25 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(8c): [started] writing file "/sysroot/var/opt/metal3/auth/ironic/password" Jan 16 20:35:25 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(8c): [finished] writing file "/sysroot/var/opt/metal3/auth/ironic/password" Jan 16 20:35:25 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(8d): [started] writing file "/sysroot/var/opt/metal3/auth/ironic/username" Jan 16 20:35:25 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(8d): [finished] writing file "/sysroot/var/opt/metal3/auth/ironic/username" Jan 16 20:35:25 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(8e): [started] writing file "/sysroot/var/opt/metal3/auth/ironic-inspector/auth-config" Jan 16 20:35:25 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(8e): [finished] writing file "/sysroot/var/opt/metal3/auth/ironic-inspector/auth-config" Jan 16 20:35:25 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(8f): [started] writing file "/sysroot/var/opt/metal3/auth/ironic-inspector/password" Jan 16 20:35:25 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(8f): [finished] writing file "/sysroot/var/opt/metal3/auth/ironic-inspector/password" Jan 16 20:35:25 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(90): [started] writing file "/sysroot/var/opt/metal3/auth/ironic-inspector/username" Jan 16 20:35:25 localhost ignition[1179]: files: createFilesystemsFiles: createFiles: op(90): [finished] writing file "/sysroot/var/opt/metal3/auth/ironic-inspector/username" Jan 16 20:35:25 localhost ignition[1179]: files: op(91): [started] processing unit "approve-csr.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(91): op(92): [started] writing unit "approve-csr.service" at "/sysroot/etc/systemd/system/approve-csr.service" Jan 16 20:35:28 localhost systemd[1]: Finished Ignition (files). Jan 16 20:35:25 localhost ignition[1179]: files: op(91): op(92): [finished] writing unit "approve-csr.service" at "/sysroot/etc/systemd/system/approve-csr.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(91): [finished] processing unit "approve-csr.service" Jan 16 20:35:28 localhost systemd[1]: Starting CoreOS Boot Edit... 
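The `createFiles: op(..): [started]/[finished]` pairs above are Ignition's files stage materializing `storage.files` entries from the rendered bootstrap config; every path lands under `/sysroot` because Ignition runs in the initramfs before the switch to the real root. As a rough illustration of what drives these messages, here is a minimal sketch that emits one such entry as Ignition-style JSON. The path is taken from the log; the spec version, certificate bytes, and file mode are placeholders, not the actual bootstrap config:

```python
import base64
import json

def ignition_file(path: str, contents: bytes, mode: int) -> dict:
    # One storage.files entry; Ignition itself prefixes /sysroot at run time.
    return {
        "path": path,
        "mode": mode,
        "contents": {"source": "data:;base64," + base64.b64encode(contents).decode()},
    }

config = {
    "ignition": {"version": "3.2.0"},  # assumed spec version; the log does not name one
    "storage": {"files": [
        ignition_file(
            "/var/opt/openshift/tls/kube-apiserver-lb-server.crt",
            b"-----BEGIN CERTIFICATE-----\n...placeholder...\n-----END CERTIFICATE-----\n",
            mode=0o644,
        ),
    ]},
}
print(json.dumps(config, indent=2))
```

Inlining contents as a `data:` URL keeps the config self-contained, which matches what the log shows: no fetch messages between the write operations, only back-to-back started/finished pairs.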
Jan 16 20:35:25 localhost ignition[1179]: files: op(93): [started] processing unit "bootkube.service" Jan 16 20:35:28 localhost systemd[1]: Starting CoreOS Post Ignition Checks... Jan 16 20:35:25 localhost ignition[1179]: files: op(93): op(94): [started] writing unit "bootkube.service" at "/sysroot/etc/systemd/system/bootkube.service" Jan 16 20:35:28 localhost systemd[1]: CoreOS Propagate Multipath Configuration was skipped because of an unmet condition check (ConditionKernelCommandLine=rd.multipath=default). Jan 16 20:35:25 localhost ignition[1179]: files: op(93): op(94): [finished] writing unit "bootkube.service" at "/sysroot/etc/systemd/system/bootkube.service" Jan 16 20:35:28 localhost systemd-journald[305]: Missed 362 kernel messages Jan 16 20:35:28 localhost kernel: EXT4-fs (vda3): mounted filesystem with ordered data mode. Quota mode: none. Jan 16 20:35:28 localhost systemd[1]: Finish FIPS mode setup was skipped because of an unmet condition check (ConditionKernelCommandLine=fips). Jan 16 20:35:25 localhost ignition[1179]: files: op(93): [finished] processing unit "bootkube.service" Jan 16 20:35:28 localhost systemd[1]: Finished CoreOS Post Ignition Checks. Jan 16 20:35:25 localhost ignition[1179]: files: op(95): [started] processing unit "chown-gatewayd-key.service" Jan 16 20:35:28 localhost coreos-boot-edit[1355]: Injected kernel arguments into BLS: root=UUID=45961086-2073-4e1e-9449-be5affdc08c1 rw rootflags=prjquota Jan 16 20:35:25 localhost ignition[1179]: files: op(95): op(96): [started] writing unit "chown-gatewayd-key.service" at "/sysroot/etc/systemd/system/chown-gatewayd-key.service" Jan 16 20:35:28 localhost coreos-boot-edit[1344]: Prepared rootmap Jan 16 20:35:25 localhost ignition[1179]: files: op(95): op(96): [finished] writing unit "chown-gatewayd-key.service" at "/sysroot/etc/systemd/system/chown-gatewayd-key.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(95): [finished] processing unit "chown-gatewayd-key.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(97): [started] processing unit "crio-configure.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(97): op(98): [started] writing unit "crio-configure.service" at "/sysroot/etc/systemd/system/crio-configure.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(97): op(98): [finished] writing unit "crio-configure.service" at "/sysroot/etc/systemd/system/crio-configure.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(97): [finished] processing unit "crio-configure.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(99): [started] processing unit "kubelet.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(99): op(9a): [started] writing unit "kubelet.service" at "/sysroot/etc/systemd/system/kubelet.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(99): op(9a): [finished] writing unit "kubelet.service" at "/sysroot/etc/systemd/system/kubelet.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(99): [finished] processing unit "kubelet.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(9b): [started] processing unit "progress.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(9b): op(9c): [started] writing unit "progress.service" at "/sysroot/etc/systemd/system/progress.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(9b): op(9c): [finished] writing unit "progress.service" at "/sysroot/etc/systemd/system/progress.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(9b): [finished] processing 
unit "progress.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(9d): [started] processing unit "release-image-pivot.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(9d): [finished] processing unit "release-image-pivot.service" Jan 16 20:35:29 localhost systemd-journald[305]: Missed 23 kernel messages Jan 16 20:35:29 localhost kernel: EXT4-fs (vda3): unmounting filesystem. Jan 16 20:35:29 localhost systemd[1]: Finished CoreOS Boot Edit. Jan 16 20:35:25 localhost ignition[1179]: files: op(9e): [started] processing unit "release-image.service" Jan 16 20:35:29 localhost systemd[1]: Reached target Ignition Boot Disk Setup. Jan 16 20:35:25 localhost ignition[1179]: files: op(9e): op(9f): [started] writing unit "release-image.service" at "/sysroot/etc/systemd/system/release-image.service" Jan 16 20:35:29 localhost systemd[1]: Reached target Ignition Complete. Jan 16 20:35:25 localhost ignition[1179]: files: op(9e): op(9f): [finished] writing unit "release-image.service" at "/sysroot/etc/systemd/system/release-image.service" Jan 16 20:35:29 localhost systemd[1]: Starting Mountpoints Configured in the Real Root... Jan 16 20:35:25 localhost ignition[1179]: files: op(9e): [finished] processing unit "release-image.service" Jan 16 20:35:29 localhost systemd[1]: Stopping Device-Mapper Multipath Device Controller... Jan 16 20:35:25 localhost ignition[1179]: files: op(a0): [started] processing unit "systemd-journal-gatewayd.service" Jan 16 20:35:29 localhost multipathd[608]: exit (signal) Jan 16 20:35:29 localhost multipathd[608]: --------shut down------- Jan 16 20:35:25 localhost ignition[1179]: files: op(a0): op(a1): [started] writing systemd drop-in "certs.conf" at "/sysroot/etc/systemd/system/systemd-journal-gatewayd.service.d/certs.conf" Jan 16 20:35:25 localhost ignition[1179]: files: op(a0): op(a1): [finished] writing systemd drop-in "certs.conf" at "/sysroot/etc/systemd/system/systemd-journal-gatewayd.service.d/certs.conf" Jan 16 20:35:25 localhost ignition[1179]: files: op(a0): [finished] processing unit "systemd-journal-gatewayd.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(a2): [started] processing unit "systemd-journal-gatewayd.socket" Jan 16 20:35:25 localhost ignition[1179]: files: op(a2): [finished] processing unit "systemd-journal-gatewayd.socket" Jan 16 20:35:25 localhost ignition[1179]: files: op(a3): [started] processing unit "zincati.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(a3): op(a4): [started] writing systemd drop-in "okd-machine-os-disabled.conf" at "/sysroot/etc/systemd/system/zincati.service.d/okd-machine-os-disabled.conf" Jan 16 20:35:25 localhost ignition[1179]: files: op(a3): op(a4): [finished] writing systemd drop-in "okd-machine-os-disabled.conf" at "/sysroot/etc/systemd/system/zincati.service.d/okd-machine-os-disabled.conf" Jan 16 20:35:25 localhost ignition[1179]: files: op(a3): [finished] processing unit "zincati.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(a5): [started] processing unit "build-ironic-env.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(a5): op(a6): [started] writing unit "build-ironic-env.service" at "/sysroot/etc/systemd/system/build-ironic-env.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(a5): op(a6): [finished] writing unit "build-ironic-env.service" at "/sysroot/etc/systemd/system/build-ironic-env.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(a5): [finished] processing unit "build-ironic-env.service" Jan 16 20:35:25 localhost 
ignition[1179]: files: op(a7): [started] processing unit "extract-machine-os.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(a7): op(a8): [started] writing unit "extract-machine-os.service" at "/sysroot/etc/systemd/system/extract-machine-os.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(a7): op(a8): [finished] writing unit "extract-machine-os.service" at "/sysroot/etc/systemd/system/extract-machine-os.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(a7): [finished] processing unit "extract-machine-os.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(a9): [started] processing unit "master-bmh-update.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(a9): op(aa): [started] writing unit "master-bmh-update.service" at "/sysroot/etc/systemd/system/master-bmh-update.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(a9): op(aa): [finished] writing unit "master-bmh-update.service" at "/sysroot/etc/systemd/system/master-bmh-update.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(a9): [finished] processing unit "master-bmh-update.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(ab): [started] processing unit "provisioning-interface.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(ab): op(ac): [started] writing unit "provisioning-interface.service" at "/sysroot/etc/systemd/system/provisioning-interface.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(ab): op(ac): [finished] writing unit "provisioning-interface.service" at "/sysroot/etc/systemd/system/provisioning-interface.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(ab): [finished] processing unit "provisioning-interface.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(ad): [started] processing unit "wait-iptables-init.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(ad): op(ae): [started] writing unit "wait-iptables-init.service" at "/sysroot/etc/systemd/system/wait-iptables-init.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(ad): op(ae): [finished] writing unit "wait-iptables-init.service" at "/sysroot/etc/systemd/system/wait-iptables-init.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(ad): [finished] processing unit "wait-iptables-init.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(af): [started] setting preset to enabled for "approve-csr.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(af): [finished] setting preset to enabled for "approve-csr.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(b0): [started] setting preset to enabled for "chown-gatewayd-key.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(b0): [finished] setting preset to enabled for "chown-gatewayd-key.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(b1): [started] setting preset to enabled for "kubelet.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(b1): [finished] setting preset to enabled for "kubelet.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(b2): [started] setting preset to enabled for "master-bmh-update.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(b2): [finished] setting preset to enabled for "master-bmh-update.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(b3): [started] setting preset to enabled for "progress.service" Jan 16 20:35:25 localhost ignition[1179]: files: op(b3): [finished] setting preset to enabled for "progress.service" Jan 16 20:35:25 localhost 
ignition[1179]: files: op(b4): [started] setting preset to enabled for "systemd-journal-gatewayd.socket" Jan 16 20:35:25 localhost ignition[1179]: files: op(b4): [finished] setting preset to enabled for "systemd-journal-gatewayd.socket" Jan 16 20:35:25 localhost ignition[1179]: files: createResultFile: createFiles: op(b5): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 16 20:35:25 localhost ignition[1179]: files: createResultFile: createFiles: op(b5): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 16 20:35:25 localhost ignition[1179]: files: op(b6): [started] relabeling 172 patterns Jan 16 20:35:25 localhost ignition[1179]: files: op(b6): executing: "setfiles" "-vF0" "-r" "/sysroot" "/sysroot/etc/selinux/targeted/contexts/files/file_contexts" "-f" "-" Jan 16 20:35:26 localhost ignition[1179]: files: op(b6): [finished] relabeling 172 patterns Jan 16 20:35:26 localhost ignition[1179]: files: files passed Jan 16 20:35:26 localhost ignition[1179]: Ignition finished successfully Jan 16 20:35:31 localhost systemd[1]: multipathd.service: Deactivated successfully. Jan 16 20:35:31 localhost systemd[1]: Stopped Device-Mapper Multipath Device Controller. Jan 16 20:35:31 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 16 20:35:31 localhost systemd[1]: Finished Mountpoints Configured in the Real Root. Jan 16 20:35:31 localhost systemd[1]: Reached target Initrd File Systems. Jan 16 20:35:31 localhost systemd[1]: Reached target Initrd Default Target. Jan 16 20:35:31 localhost systemd[1]: Starting dracut mount hook... Jan 16 20:35:31 localhost systemd[1]: Finished dracut mount hook. Jan 16 20:35:31 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook... Jan 16 20:35:32 localhost dracut-pre-pivot[1400]: 925.761684 | /etc/multipath.conf does not exist, blacklisting all devices. Jan 16 20:35:32 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook. Jan 16 20:35:32 localhost dracut-pre-pivot[1400]: 925.765579 | You can run "/sbin/mpathconf --enable" to create Jan 16 20:35:32 localhost dracut-pre-pivot[1400]: 925.765618 | /etc/multipath.conf. See man mpathconf(8) for more details Jan 16 20:35:32 localhost systemd[1]: Workaround dracut FIPS unmounting /boot was skipped because of an unmet condition check (ConditionPathExists=/run/ostree-live). Jan 16 20:35:32 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons... Jan 16 20:35:32 localhost systemd[1]: Stopped target Network. Jan 16 20:35:32 localhost systemd[1]: Stopped target Remote Encrypted Volumes. Jan 16 20:35:32 localhost systemd[1]: Stopped target Timer Units. Jan 16 20:35:32 localhost systemd[1]: dbus.socket: Deactivated successfully. Jan 16 20:35:32 localhost systemd[1]: Closed D-Bus System Message Bus Socket. Jan 16 20:35:32 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 16 20:35:32 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook. Jan 16 20:35:32 localhost systemd[1]: Stopped target Initrd Default Target. Jan 16 20:35:32 localhost systemd[1]: Stopped target Ignition Complete. Jan 16 20:35:32 localhost systemd[1]: Stopped target Ignition Boot Disk Setup. Jan 16 20:35:32 localhost systemd[1]: Stopped target Initrd Root Device. Jan 16 20:35:32 localhost systemd[1]: Stopped target Initrd /usr File System. Jan 16 20:35:32 localhost systemd[1]: Stopped target Remote File Systems. Jan 16 20:35:32 localhost systemd[1]: Stopped target Preparation for Remote File Systems. 
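The unit-processing messages above (op(91) through op(ae)) and the preset messages (op(af) through op(b4)) correspond to the `systemd.units` section of the same config: `contents` produces the "writing unit ... at /sysroot/etc/systemd/system/..." lines, a `dropins` entry produces the "writing systemd drop-in" lines, and `enabled` produces "setting preset to enabled". A minimal sketch of that shape, with placeholder unit text (the real bootstrap units are not reproduced in this log):

```python
import json

# Placeholder unit body; only the structure matters here.
kubelet_unit = """\
[Unit]
Description=Kubelet (placeholder contents)

[Service]
ExecStart=/usr/bin/kubelet

[Install]
WantedBy=multi-user.target
"""

config = {
    "ignition": {"version": "3.2.0"},  # assumed spec version, as above
    "systemd": {"units": [
        # contents + enabled -> "writing unit ..." and "setting preset to enabled"
        {"name": "kubelet.service", "enabled": True, "contents": kubelet_unit},
        # dropins -> the "writing systemd drop-in \"certs.conf\"" message above
        {"name": "systemd-journal-gatewayd.service", "dropins": [
            {"name": "certs.conf", "contents": "[Service]\n# TLS settings would go here\n"},
        ]},
    ]},
}
print(json.dumps(config, indent=2))
```

Note that units without `contents` or `dropins` (for example `release-image-pivot.service` above) still log a processing pair, since presets and enablement are handled per unit name.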
Jan 16 20:35:32 localhost systemd[1]: coreos-boot-edit.service: Deactivated successfully. Jan 16 20:35:32 localhost systemd[1]: Stopped CoreOS Boot Edit. Jan 16 20:35:32 localhost systemd[1]: coreos-post-ignition-checks.service: Deactivated successfully. Jan 16 20:35:32 localhost systemd[1]: Stopped CoreOS Post Ignition Checks. Jan 16 20:35:32 localhost systemd[1]: coreos-touch-run-agetty.service: Deactivated successfully. Jan 16 20:35:32 localhost systemd[1]: Stopped CoreOS: Touch /run/agetty.reload. Jan 16 20:35:33 localhost systemd[1]: dracut-mount.service: Deactivated successfully. Jan 16 20:35:33 localhost systemd[1]: Stopped dracut mount hook. Jan 16 20:35:33 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 16 20:35:33 localhost systemd[1]: Stopped dracut pre-mount hook. Jan 16 20:35:33 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 16 20:35:33 localhost systemd[1]: Stopped dracut initqueue hook. Jan 16 20:35:33 localhost systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 16 20:35:33 localhost systemd[1]: Stopped Ignition (fetch-offline). Jan 16 20:35:33 localhost systemd[1]: ignition-fetch-offline.service: Consumed 13min 43.009s CPU time. Jan 16 20:35:33 localhost systemd[1]: coreos-ignition-setup-user.service: Deactivated successfully. Jan 16 20:35:33 localhost systemd[1]: Stopped CoreOS Ignition User Config Setup. Jan 16 20:35:33 localhost systemd[1]: ignition-files.service: Deactivated successfully. Jan 16 20:35:33 localhost systemd[1]: Stopped Ignition (files). Jan 16 20:35:33 localhost systemd[1]: ignition-files.service: Consumed 1.519s CPU time. Jan 16 20:35:33 localhost systemd[1]: ignition-ostree-check-rootfs-size.service: Deactivated successfully. Jan 16 20:35:33 localhost systemd[1]: Stopped Ignition OSTree: Check Root Filesystem Size. Jan 16 20:35:33 localhost systemd[1]: ignition-ostree-populate-var.service: Deactivated successfully. Jan 16 20:35:33 localhost systemd[1]: Stopped Populate OSTree /var. Jan 16 20:35:33 localhost systemd[1]: ignition-ostree-populate-var.service: Consumed 4.023s CPU time. Jan 16 20:35:33 localhost systemd[1]: Stopping Ignition (mount)... Jan 16 20:35:33 localhost systemd[1]: ignition-ostree-transposefs-autosave-xfs.service: Deactivated successfully. Jan 16 20:35:33 localhost systemd[1]: Stopped Ignition OSTree: Autosave XFS Rootfs Partition. Jan 16 20:35:33 localhost systemd[1]: ignition-ostree-growfs.service: Deactivated successfully. Jan 16 20:35:33 localhost systemd[1]: Stopped Ignition OSTree: Grow Root Filesystem. Jan 16 20:35:33 localhost ignition[1406]: Ignition 2.16.2 Jan 16 20:35:33 localhost systemd[1]: ignition-ostree-uuid-root.service: Deactivated successfully. Jan 16 20:35:33 localhost ignition[1406]: Stage: umount Jan 16 20:35:33 localhost systemd[1]: Stopped Ignition OSTree: Regenerate Filesystem UUID (root). Jan 16 20:35:33 localhost ignition[1406]: reading system config file "/usr/lib/ignition/base.d/00-core.ign" Jan 16 20:35:33 localhost systemd[1]: ignition-mount.service: Deactivated successfully. Jan 16 20:35:33 localhost ignition[1406]: parsing config with SHA512: ff6a5153be363997e4d5d3ea8cc4048373a457c48c4a5b134a08a30aacd167c1e0f099f0bdf1e24c99ad180628cd02b767b863b5fe3a8fce3fe1886847eb8e2e Jan 16 20:35:33 localhost systemd[1]: Stopped Ignition (mount). 
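The umount-stage Ignition run above logs "parsing config with SHA512: ff6a51..."; as far as the log shows, that digest identifies the exact config bytes the stage consumed, so it can be used to confirm which rendered config a node actually ran. A small sketch of reproducing the fingerprint locally (file path is whatever copy of the config you hold):

```python
import hashlib
import sys

def config_fingerprint(path: str) -> str:
    # SHA-512 over the raw config bytes, comparable to the digest in the journal line.
    with open(path, "rb") as f:
        return hashlib.sha512(f.read()).hexdigest()

if __name__ == "__main__":
    print(config_fingerprint(sys.argv[1]))
```

If the printed digest matches the one in the journal, the node parsed the file you think it did; a mismatch usually means a different config was served or merged.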
Jan 16 20:35:33 localhost ignition[1406]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 16 20:35:33 localhost ignition[1406]: umount: umount passed Jan 16 20:35:33 localhost ignition[1406]: Ignition finished successfully Jan 16 20:35:34 localhost systemd[1]: coreos-ignition-unique-boot.service: Deactivated successfully. Jan 16 20:35:34 localhost systemd[1]: Stopped CoreOS Ensure Unique Boot Filesystem. Jan 16 20:35:34 localhost systemd[1]: Unmount Live /var if Persistent /var Is Configured was skipped because of an unmet condition check (ConditionPathExists=/run/ostree-live). Jan 16 20:35:34 localhost systemd[1]: Stopping CoreOS Tear Down Initramfs... Jan 16 20:35:34 localhost systemd[1]: ignition-disks.service: Deactivated successfully. Jan 16 20:35:34 localhost coreos-teardown-initramfs[1414]: info: taking down network device: ens3 Jan 16 20:35:34 localhost systemd[1]: Stopped Ignition (disks). Jan 16 20:35:34 localhost coreos-teardown-initramfs[1426]: RTNETLINK answers: Operation not supported Jan 16 20:35:34 localhost systemd[1]: Stopping Mount OSTree /var... Jan 16 20:35:34 localhost ignition-ostree-mount-var[1429]: Unmounting /sysroot/var Jan 16 20:35:34 localhost coreos-teardown-initramfs[1414]: info: taking down network device: ens4 Jan 16 20:35:34 localhost systemd[1]: Stopping Ignition OSTree: Detect Partition Transposition... Jan 16 20:35:34 localhost coreos-teardown-initramfs[1432]: RTNETLINK answers: Operation not supported Jan 16 20:35:34 localhost systemd[1]: rhcos-fail-boot-for-legacy-luks-config.service: Deactivated successfully. Jan 16 20:35:34 localhost coreos-teardown-initramfs[1414]: info: flushing all routing Jan 16 20:35:34 localhost coreos-teardown-initramfs[1414]: info: no initramfs hostname information to propagate Jan 16 20:35:34 localhost coreos-teardown-initramfs[1414]: info: networking config is defined in the real root Jan 16 20:35:34 localhost coreos-teardown-initramfs[1414]: info: will not attempt to propagate initramfs networking Jan 16 20:35:34 localhost systemd[1]: Stopped RHCOS Check For Legacy LUKS Configuration. Jan 16 20:35:34 localhost systemd[1]: Stopped target Basic System. Jan 16 20:35:34 localhost systemd[1]: Stopped target Path Units. Jan 16 20:35:34 localhost systemd[1]: Stopped target Slice Units. Jan 16 20:35:34 localhost systemd[1]: Stopped target Socket Units. Jan 16 20:35:34 localhost systemd[1]: Stopped target System Initialization. Jan 16 20:35:34 localhost systemd[1]: Stopped target Local Encrypted Volumes. Jan 16 20:35:34 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 16 20:35:34 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch. Jan 16 20:35:34 localhost systemd[1]: Stopped target Local Encrypted Volumes (Pre). Jan 16 20:35:34 localhost systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 16 20:35:34 localhost systemd[1]: Stopped Forward Password Requests to Clevis Directory Watch. Jan 16 20:35:35 localhost systemd[1]: Stopped target Local File Systems. Jan 16 20:35:35 localhost systemd[1]: Stopped target Preparation for Local File Systems. Jan 16 20:35:35 localhost systemd[1]: Stopped target Swaps. Jan 16 20:35:35 localhost systemd[1]: Acquire Live PXE rootfs Image was skipped because of an unmet condition check (ConditionPathExists=/run/ostree-live). Jan 16 20:35:35 localhost systemd[1]: rhcos-fips.service: Deactivated successfully. Jan 16 20:35:35 localhost systemd[1]: Stopped Check for FIPS mode. 
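The coreos-teardown-initramfs messages above show the initramfs networking being dismantled before switch root: each interface (ens3, ens4) is taken down, all routing is flushed, and propagation of initramfs network config is skipped because "networking config is defined in the real root". The real implementation is a shell script; the following is only a Python approximation of the logged steps, with the device list taken from the log:

```python
import subprocess

def teardown_initramfs_networking(devices=("ens3", "ens4")):
    # Approximation of the teardown steps logged above. Errors such as
    # "RTNETLINK answers: Operation not supported" are tolerated (check=False),
    # matching the log, where those replies do not abort the teardown.
    for dev in devices:
        subprocess.run(["ip", "link", "set", "dev", dev, "down"], check=False)
    # "info: flushing all routing"
    subprocess.run(["ip", "route", "flush", "table", "main"], check=False)

if __name__ == "__main__":
    teardown_initramfs_networking()
```

Tearing the stack down cleanly matters here because NetworkManager in the real root is expected to configure the same interfaces from scratch moments later.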
Jan 16 20:35:35 localhost systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 16 20:35:35 localhost systemd[1]: Stopped Ignition (kargs). Jan 16 20:35:35 localhost systemd[1]: coreos-copy-firstboot-network.service: Deactivated successfully. Jan 16 20:35:35 localhost systemd[1]: Stopped Copy CoreOS Firstboot Networking Config. Jan 16 20:35:35 localhost systemd[1]: ignition-ostree-uuid-boot.service: Deactivated successfully. Jan 16 20:35:35 localhost systemd[1]: Stopped Ignition OSTree: Regenerate Filesystem UUID (boot). Jan 16 20:35:35 localhost systemd[1]: coreos-gpt-setup.service: Deactivated successfully. Jan 16 20:35:35 localhost systemd[1]: Stopped Generate New UUID For Boot Disk GPT. Jan 16 20:35:35 localhost systemd[1]: coreos-unique-boot.service: Deactivated successfully. Jan 16 20:35:35 localhost systemd[1]: Stopped Ensure Unique `boot` Filesystem Label. Jan 16 20:35:35 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 20:35:35 localhost systemd[1]: Stopped Apply Kernel Variables. Jan 16 20:35:35 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 16 20:35:35 localhost systemd[1]: Stopped Load Kernel Modules. Jan 16 20:35:35 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 16 20:35:35 localhost systemd[1]: Stopped Create Volatile Files and Directories. Jan 16 20:35:35 localhost systemd[1]: systemd-udev-settle.service: Deactivated successfully. Jan 16 20:35:35 localhost systemd[1]: Stopped Wait for udev To Complete Device Initialization. Jan 16 20:35:35 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 16 20:35:35 localhost systemd[1]: Stopped Coldplug All udev Devices. Jan 16 20:35:35 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 16 20:35:35 localhost systemd[1]: Stopped dracut pre-trigger hook. Jan 16 20:35:35 localhost systemd[1]: sysroot-var.mount: Deactivated successfully. Jan 16 20:35:35 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 16 20:35:35 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 16 20:35:35 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 16 20:35:35 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons. Jan 16 20:35:36 localhost systemd[1]: coreos-teardown-initramfs.service: Deactivated successfully. Jan 16 20:35:36 localhost systemd[1]: Stopped CoreOS Tear Down Initramfs. Jan 16 20:35:36 localhost systemd[1]: ignition-ostree-mount-var.service: Deactivated successfully. Jan 16 20:35:36 localhost systemd[1]: Stopped Mount OSTree /var. Jan 16 20:35:36 localhost systemd[1]: ignition-ostree-transposefs-detect.service: Deactivated successfully. Jan 16 20:35:36 localhost systemd[1]: Stopped Ignition OSTree: Detect Partition Transposition. Jan 16 20:35:36 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files... Jan 16 20:35:36 localhost systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 16 20:35:36 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files. Jan 16 20:35:36 localhost systemd[1]: systemd-udevd.service: Consumed 6.285s CPU time. Jan 16 20:35:36 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 16 20:35:36 localhost systemd[1]: Closed udev Control Socket. 
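With the Ignition stages finished and the initramfs tearing itself down, the `op(..): [started]/[finished]` pairs earlier in this log can be mined for timing. Most ops here complete within the same second, which makes gaps stand out, such as the nine-second pause between the kubelet-signer.key write finishing at 20:35:15 and op(87) starting at 20:35:24. A sketch of such a parser, assuming one journal message per line (as `journalctl` emits; the wrapped lines in this capture would need unwrapping first) and a caller-supplied year, since journal timestamps omit it:

```python
import re
from datetime import datetime

OP_LINE = re.compile(
    r"^(?P<ts>[A-Z][a-z]{2} \d{2} \d{2}:\d{2}:\d{2}) \S+ ignition\[\d+\]: .*"
    r"op\((?P<op>[0-9a-f]+)\): \[(?P<state>started|finished)\] (?P<what>.+)$"
)

def op_durations(lines, year=2024):
    # Pairs [started]/[finished] per op id; nested ops like "op(91): op(92):"
    # match the innermost id, which is the one the state applies to.
    started, done = {}, []
    for line in lines:
        m = OP_LINE.match(line)
        if not m:
            continue
        ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S")
        if m["state"] == "started":
            started[m["op"]] = (ts, m["what"])
        elif m["op"] in started:
            t0, what = started.pop(m["op"])
            done.append((m["op"], what, (ts - t0).total_seconds()))
    return done

if __name__ == "__main__":
    import sys
    for op, what, secs in op_durations(sys.stdin):
        if secs > 0:
            print(f"op({op}) took {secs:.0f}s: {what}")
```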
Jan 16 20:35:36 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 16 20:35:36 localhost systemd[1]: Closed udev Kernel Socket. Jan 16 20:35:36 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 16 20:35:36 localhost systemd[1]: Stopped dracut pre-udev hook. Jan 16 20:35:36 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 16 20:35:36 localhost systemd[1]: Stopped dracut cmdline hook. Jan 16 20:35:36 localhost systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jan 16 20:35:36 localhost systemd[1]: Stopped Afterburn Initrd Setup Network Kernel Arguments. Jan 16 20:35:36 localhost systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 20:35:36 localhost systemd[1]: Stopped dracut ask for additional cmdline parameters. Jan 16 20:35:36 localhost systemd[1]: Starting Cleanup udev Database... Jan 16 20:35:36 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 16 20:35:36 localhost systemd[1]: Stopped Create Static Device Nodes in /dev. Jan 16 20:35:36 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 16 20:35:36 localhost systemd[1]: Stopped Create List of Static Device Nodes. Jan 16 20:35:36 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully. Jan 16 20:35:36 localhost systemd[1]: Stopped Create System Users. Jan 16 20:35:36 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 20:35:36 localhost systemd[1]: Stopped Setup Virtual Console. Jan 16 20:35:36 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 16 20:35:36 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully. Jan 16 20:35:37 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 16 20:35:37 localhost systemd[1]: Finished Cleanup udev Database. Jan 16 20:35:37 localhost systemd[1]: Reached target Switch Root. Jan 16 20:35:37 localhost systemd[1]: Starting Switch Root... Jan 16 20:35:37 localhost systemd[1]: Switching root. Jan 16 20:35:37 localhost systemd-journald[305]: Journal stopped Jan 16 20:35:43 localhost ignition[1179]: files: op(9d): [finished] processing unit "release-image-pivot.service" Jan 16 20:35:43 localhost systemd[1]: Finished CoreOS Boot Edit. Jan 16 20:35:43 localhost ignition[1179]: files: op(9e): [started] processing unit "release-image.service" Jan 16 20:35:43 localhost systemd[1]: Reached target Ignition Boot Disk Setup. Jan 16 20:35:43 localhost ignition[1179]: files: op(9e): op(9f): [started] writing unit "release-image.service" at "/sysroot/etc/systemd/system/release-image.service" Jan 16 20:35:43 localhost systemd[1]: Reached target Ignition Complete. Jan 16 20:35:43 localhost ignition[1179]: files: op(9e): op(9f): [finished] writing unit "release-image.service" at "/sysroot/etc/systemd/system/release-image.service" Jan 16 20:35:43 localhost systemd[1]: Starting Mountpoints Configured in the Real Root... Jan 16 20:35:43 localhost ignition[1179]: files: op(9e): [finished] processing unit "release-image.service" Jan 16 20:35:43 localhost systemd[1]: Stopping Device-Mapper Multipath Device Controller... 
Jan 16 20:35:43 localhost ignition[1179]: files: op(a0): [started] processing unit "systemd-journal-gatewayd.service" Jan 16 20:35:43 localhost multipathd[608]: exit (signal) Jan 16 20:35:43 localhost multipathd[608]: --------shut down------- Jan 16 20:35:43 localhost ignition[1179]: files: op(a0): op(a1): [started] writing systemd drop-in "certs.conf" at "/sysroot/etc/systemd/system/systemd-journal-gatewayd.service.d/certs.conf" Jan 16 20:35:43 localhost ignition[1179]: files: op(a0): op(a1): [finished] writing systemd drop-in "certs.conf" at "/sysroot/etc/systemd/system/systemd-journal-gatewayd.service.d/certs.conf" Jan 16 20:35:43 localhost ignition[1179]: files: op(a0): [finished] processing unit "systemd-journal-gatewayd.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(a2): [started] processing unit "systemd-journal-gatewayd.socket" Jan 16 20:35:43 localhost ignition[1179]: files: op(a2): [finished] processing unit "systemd-journal-gatewayd.socket" Jan 16 20:35:43 localhost ignition[1179]: files: op(a3): [started] processing unit "zincati.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(a3): op(a4): [started] writing systemd drop-in "okd-machine-os-disabled.conf" at "/sysroot/etc/systemd/system/zincati.service.d/okd-machine-os-disabled.conf" Jan 16 20:35:43 localhost ignition[1179]: files: op(a3): op(a4): [finished] writing systemd drop-in "okd-machine-os-disabled.conf" at "/sysroot/etc/systemd/system/zincati.service.d/okd-machine-os-disabled.conf" Jan 16 20:35:43 localhost ignition[1179]: files: op(a3): [finished] processing unit "zincati.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(a5): [started] processing unit "build-ironic-env.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(a5): op(a6): [started] writing unit "build-ironic-env.service" at "/sysroot/etc/systemd/system/build-ironic-env.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(a5): op(a6): [finished] writing unit "build-ironic-env.service" at "/sysroot/etc/systemd/system/build-ironic-env.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(a5): [finished] processing unit "build-ironic-env.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(a7): [started] processing unit "extract-machine-os.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(a7): op(a8): [started] writing unit "extract-machine-os.service" at "/sysroot/etc/systemd/system/extract-machine-os.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(a7): op(a8): [finished] writing unit "extract-machine-os.service" at "/sysroot/etc/systemd/system/extract-machine-os.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(a7): [finished] processing unit "extract-machine-os.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(a9): [started] processing unit "master-bmh-update.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(a9): op(aa): [started] writing unit "master-bmh-update.service" at "/sysroot/etc/systemd/system/master-bmh-update.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(a9): op(aa): [finished] writing unit "master-bmh-update.service" at "/sysroot/etc/systemd/system/master-bmh-update.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(a9): [finished] processing unit "master-bmh-update.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(ab): [started] processing unit "provisioning-interface.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(ab): op(ac): [started] writing unit 
"provisioning-interface.service" at "/sysroot/etc/systemd/system/provisioning-interface.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(ab): op(ac): [finished] writing unit "provisioning-interface.service" at "/sysroot/etc/systemd/system/provisioning-interface.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(ab): [finished] processing unit "provisioning-interface.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(ad): [started] processing unit "wait-iptables-init.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(ad): op(ae): [started] writing unit "wait-iptables-init.service" at "/sysroot/etc/systemd/system/wait-iptables-init.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(ad): op(ae): [finished] writing unit "wait-iptables-init.service" at "/sysroot/etc/systemd/system/wait-iptables-init.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(ad): [finished] processing unit "wait-iptables-init.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(af): [started] setting preset to enabled for "approve-csr.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(af): [finished] setting preset to enabled for "approve-csr.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(b0): [started] setting preset to enabled for "chown-gatewayd-key.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(b0): [finished] setting preset to enabled for "chown-gatewayd-key.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(b1): [started] setting preset to enabled for "kubelet.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(b1): [finished] setting preset to enabled for "kubelet.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(b2): [started] setting preset to enabled for "master-bmh-update.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(b2): [finished] setting preset to enabled for "master-bmh-update.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(b3): [started] setting preset to enabled for "progress.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(b3): [finished] setting preset to enabled for "progress.service" Jan 16 20:35:43 localhost ignition[1179]: files: op(b4): [started] setting preset to enabled for "systemd-journal-gatewayd.socket" Jan 16 20:35:43 localhost ignition[1179]: files: op(b4): [finished] setting preset to enabled for "systemd-journal-gatewayd.socket" Jan 16 20:35:43 localhost ignition[1179]: files: createResultFile: createFiles: op(b5): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 16 20:35:43 localhost ignition[1179]: files: createResultFile: createFiles: op(b5): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 16 20:35:43 localhost ignition[1179]: files: op(b6): [started] relabeling 172 patterns Jan 16 20:35:43 localhost ignition[1179]: files: op(b6): [finished] relabeling 172 patterns Jan 16 20:35:43 localhost ignition[1179]: files: files passed Jan 16 20:35:43 localhost ignition[1179]: Ignition finished successfully Jan 16 20:35:43 localhost systemd[1]: multipathd.service: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Stopped Device-Mapper Multipath Device Controller. Jan 16 20:35:43 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Finished Mountpoints Configured in the Real Root. Jan 16 20:35:43 localhost systemd[1]: Reached target Initrd File Systems. 
Jan 16 20:35:43 localhost systemd[1]: Reached target Initrd Default Target. Jan 16 20:35:43 localhost systemd[1]: Starting dracut mount hook... Jan 16 20:35:43 localhost systemd[1]: Finished dracut mount hook. Jan 16 20:35:43 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook... Jan 16 20:35:43 localhost dracut-pre-pivot[1400]: 925.761684 | /etc/multipath.conf does not exist, blacklisting all devices. Jan 16 20:35:43 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook. Jan 16 20:35:43 localhost dracut-pre-pivot[1400]: 925.765579 | You can run "/sbin/mpathconf --enable" to create Jan 16 20:35:43 localhost dracut-pre-pivot[1400]: 925.765618 | /etc/multipath.conf. See man mpathconf(8) for more details Jan 16 20:35:43 localhost systemd[1]: Workaround dracut FIPS unmounting /boot was skipped because of an unmet condition check (ConditionPathExists=/run/ostree-live). Jan 16 20:35:43 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons... Jan 16 20:35:43 localhost systemd[1]: Stopped target Network. Jan 16 20:35:43 localhost systemd[1]: Stopped target Remote Encrypted Volumes. Jan 16 20:35:43 localhost systemd[1]: Stopped target Timer Units. Jan 16 20:35:43 localhost systemd[1]: dbus.socket: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Closed D-Bus System Message Bus Socket. Jan 16 20:35:43 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook. Jan 16 20:35:43 localhost systemd[1]: Stopped target Initrd Default Target. Jan 16 20:35:43 localhost systemd[1]: Stopped target Ignition Complete. Jan 16 20:35:43 localhost systemd[1]: Stopped target Ignition Boot Disk Setup. Jan 16 20:35:43 localhost systemd[1]: Stopped target Initrd Root Device. Jan 16 20:35:43 localhost systemd[1]: Stopped target Initrd /usr File System. Jan 16 20:35:43 localhost systemd[1]: Stopped target Remote File Systems. Jan 16 20:35:43 localhost systemd[1]: Stopped target Preparation for Remote File Systems. Jan 16 20:35:43 localhost systemd[1]: coreos-boot-edit.service: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Stopped CoreOS Boot Edit. Jan 16 20:35:43 localhost systemd[1]: coreos-post-ignition-checks.service: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Stopped CoreOS Post Ignition Checks. Jan 16 20:35:43 localhost systemd[1]: coreos-touch-run-agetty.service: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Stopped CoreOS: Touch /run/agetty.reload. Jan 16 20:35:43 localhost systemd[1]: dracut-mount.service: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Stopped dracut mount hook. Jan 16 20:35:43 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Stopped dracut pre-mount hook. Jan 16 20:35:43 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Stopped dracut initqueue hook. Jan 16 20:35:43 localhost systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Stopped Ignition (fetch-offline). Jan 16 20:35:43 localhost systemd[1]: ignition-fetch-offline.service: Consumed 13min 43.009s CPU time. Jan 16 20:35:43 localhost systemd[1]: coreos-ignition-setup-user.service: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Stopped CoreOS Ignition User Config Setup. 
Jan 16 20:35:43 localhost systemd[1]: ignition-files.service: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Stopped Ignition (files). Jan 16 20:35:43 localhost systemd[1]: ignition-files.service: Consumed 1.519s CPU time. Jan 16 20:35:43 localhost systemd[1]: ignition-ostree-check-rootfs-size.service: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Stopped Ignition OSTree: Check Root Filesystem Size. Jan 16 20:35:43 localhost systemd[1]: ignition-ostree-populate-var.service: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Stopped Populate OSTree /var. Jan 16 20:35:43 localhost systemd[1]: ignition-ostree-populate-var.service: Consumed 4.023s CPU time. Jan 16 20:35:43 localhost systemd[1]: Stopping Ignition (mount)... Jan 16 20:35:43 localhost systemd[1]: ignition-ostree-transposefs-autosave-xfs.service: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Stopped Ignition OSTree: Autosave XFS Rootfs Partition. Jan 16 20:35:43 localhost systemd[1]: ignition-ostree-growfs.service: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Stopped Ignition OSTree: Grow Root Filesystem. Jan 16 20:35:43 localhost ignition[1406]: Ignition 2.16.2 Jan 16 20:35:43 localhost systemd[1]: ignition-ostree-uuid-root.service: Deactivated successfully. Jan 16 20:35:43 localhost ignition[1406]: Stage: umount Jan 16 20:35:43 localhost systemd[1]: Stopped Ignition OSTree: Regenerate Filesystem UUID (root). Jan 16 20:35:43 localhost ignition[1406]: reading system config file "/usr/lib/ignition/base.d/00-core.ign" Jan 16 20:35:43 localhost systemd[1]: ignition-mount.service: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Stopped Ignition (mount). Jan 16 20:35:43 localhost ignition[1406]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 16 20:35:43 localhost ignition[1406]: umount: umount passed Jan 16 20:35:43 localhost ignition[1406]: Ignition finished successfully Jan 16 20:35:43 localhost systemd[1]: coreos-ignition-unique-boot.service: Deactivated successfully. Jan 16 20:35:43 localhost systemd[1]: Stopped CoreOS Ensure Unique Boot Filesystem. Jan 16 20:35:43 localhost systemd[1]: Unmount Live /var if Persistent /var Is Configured was skipped because of an unmet condition check (ConditionPathExists=/run/ostree-live). Jan 16 20:35:43 localhost systemd[1]: Stopping CoreOS Tear Down Initramfs... Jan 16 20:35:43 localhost systemd[1]: ignition-disks.service: Deactivated successfully. Jan 16 20:35:43 localhost coreos-teardown-initramfs[1414]: info: taking down network device: ens3 Jan 16 20:35:43 localhost systemd[1]: Stopped Ignition (disks). Jan 16 20:35:43 localhost coreos-teardown-initramfs[1426]: RTNETLINK answers: Operation not supported Jan 16 20:35:43 localhost systemd[1]: Stopping Mount OSTree /var... Jan 16 20:35:44 localhost ignition-ostree-mount-var[1429]: Unmounting /sysroot/var Jan 16 20:35:44 localhost coreos-teardown-initramfs[1414]: info: taking down network device: ens4 Jan 16 20:35:44 localhost systemd[1]: Stopping Ignition OSTree: Detect Partition Transposition... Jan 16 20:35:44 localhost coreos-teardown-initramfs[1432]: RTNETLINK answers: Operation not supported Jan 16 20:35:44 localhost systemd[1]: rhcos-fail-boot-for-legacy-luks-config.service: Deactivated successfully. 
Jan 16 20:35:44 localhost coreos-teardown-initramfs[1414]: info: flushing all routing Jan 16 20:35:44 localhost coreos-teardown-initramfs[1414]: info: no initramfs hostname information to propagate Jan 16 20:35:44 localhost coreos-teardown-initramfs[1414]: info: networking config is defined in the real root Jan 16 20:35:44 localhost coreos-teardown-initramfs[1414]: info: will not attempt to propagate initramfs networking Jan 16 20:35:44 localhost systemd[1]: Stopped RHCOS Check For Legacy LUKS Configuration. Jan 16 20:35:44 localhost systemd[1]: Stopped target Basic System. Jan 16 20:35:44 localhost systemd[1]: Stopped target Path Units. Jan 16 20:35:44 localhost systemd[1]: Stopped target Slice Units. Jan 16 20:35:44 localhost systemd[1]: Stopped target Socket Units. Jan 16 20:35:44 localhost systemd[1]: Stopped target System Initialization. Jan 16 20:35:44 localhost systemd[1]: Stopped target Local Encrypted Volumes. Jan 16 20:35:44 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch. Jan 16 20:35:44 localhost systemd[1]: Stopped target Local Encrypted Volumes (Pre). Jan 16 20:35:44 localhost systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Forward Password Requests to Clevis Directory Watch. Jan 16 20:35:44 localhost systemd[1]: Stopped target Local File Systems. Jan 16 20:35:44 localhost systemd[1]: Stopped target Preparation for Local File Systems. Jan 16 20:35:44 localhost systemd[1]: Stopped target Swaps. Jan 16 20:35:44 localhost systemd[1]: Acquire Live PXE rootfs Image was skipped because of an unmet condition check (ConditionPathExists=/run/ostree-live). Jan 16 20:35:44 localhost systemd[1]: rhcos-fips.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Check for FIPS mode. Jan 16 20:35:44 localhost systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Ignition (kargs). Jan 16 20:35:44 localhost systemd[1]: coreos-copy-firstboot-network.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Copy CoreOS Firstboot Networking Config. Jan 16 20:35:44 localhost systemd[1]: ignition-ostree-uuid-boot.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Ignition OSTree: Regenerate Filesystem UUID (boot). Jan 16 20:35:44 localhost systemd[1]: coreos-gpt-setup.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Generate New UUID For Boot Disk GPT. Jan 16 20:35:44 localhost systemd[1]: coreos-unique-boot.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Ensure Unique `boot` Filesystem Label. Jan 16 20:35:44 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Apply Kernel Variables. Jan 16 20:35:44 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Load Kernel Modules. Jan 16 20:35:44 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Create Volatile Files and Directories. Jan 16 20:35:44 localhost systemd[1]: systemd-udev-settle.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Wait for udev To Complete Device Initialization. 
Jan 16 20:35:44 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Coldplug All udev Devices. Jan 16 20:35:44 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped dracut pre-trigger hook. Jan 16 20:35:44 localhost systemd[1]: sysroot-var.mount: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons. Jan 16 20:35:44 localhost systemd[1]: coreos-teardown-initramfs.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped CoreOS Tear Down Initramfs. Jan 16 20:35:44 localhost systemd[1]: ignition-ostree-mount-var.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Mount OSTree /var. Jan 16 20:35:44 localhost systemd[1]: ignition-ostree-transposefs-detect.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Ignition OSTree: Detect Partition Transposition. Jan 16 20:35:44 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files... Jan 16 20:35:44 localhost systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files. Jan 16 20:35:44 localhost systemd[1]: systemd-udevd.service: Consumed 6.285s CPU time. Jan 16 20:35:44 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Closed udev Control Socket. Jan 16 20:35:44 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Closed udev Kernel Socket. Jan 16 20:35:44 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped dracut pre-udev hook. Jan 16 20:35:44 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped dracut cmdline hook. Jan 16 20:35:44 localhost systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Afterburn Initrd Setup Network Kernel Arguments. Jan 16 20:35:44 localhost systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped dracut ask for additional cmdline parameters. Jan 16 20:35:44 localhost systemd[1]: Starting Cleanup udev Database... Jan 16 20:35:44 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Create Static Device Nodes in /dev. Jan 16 20:35:44 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Create List of Static Device Nodes. Jan 16 20:35:44 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Create System Users. Jan 16 20:35:44 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 20:35:44 localhost systemd[1]: Stopped Setup Virtual Console. 
Jan 16 20:35:44 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 16 20:35:44 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 16 20:35:44 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 16 20:35:44 localhost systemd[1]: Finished Cleanup udev Database.
Jan 16 20:35:44 localhost systemd[1]: Reached target Switch Root.
Jan 16 20:35:44 localhost systemd[1]: Starting Switch Root...
Jan 16 20:35:44 localhost systemd[1]: Switching root.
Jan 16 20:35:44 localhost systemd-journald[305]: Received SIGTERM from PID 1 (systemd).
Jan 16 20:35:44 localhost kernel: audit: type=1404 audit(1705437337.495:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 16 20:35:44 localhost kernel: SELinux: policy capability network_peer_controls=1
Jan 16 20:35:44 localhost kernel: SELinux: policy capability open_perms=1
Jan 16 20:35:44 localhost kernel: SELinux: policy capability extended_socket_class=1
Jan 16 20:35:44 localhost kernel: SELinux: policy capability always_check_network=0
Jan 16 20:35:44 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Jan 16 20:35:44 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 16 20:35:44 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Jan 16 20:35:44 localhost kernel: audit: type=1403 audit(1705437337.908:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 16 20:35:44 localhost systemd[1]: Successfully loaded SELinux policy in 451.223ms.
Jan 16 20:35:44 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 138.784ms.
Jan 16 20:35:44 localhost systemd[1]: Configuration file /etc/systemd/system.conf.d/10-default-env.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Jan 16 20:35:44 localhost systemd[1]: systemd 252-14.el9_2.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 16 20:35:44 localhost systemd[1]: Detected virtualization kvm.
Jan 16 20:35:44 localhost systemd[1]: Detected architecture x86-64.
Jan 16 20:35:44 localhost systemd[1]: Detected first boot.
Jan 16 20:35:44 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 16 20:35:44 localhost quadlet-generator[1459]: Warning: image-customization.container specifies the image "$CUSTOMIZATION_IMAGE" which is not a fully qualified image name. This is not ideal for performance and security reasons. See the podman-pull manpage discussion of short-name-aliases.conf for details.
Jan 16 20:35:44 localhost quadlet-generator[1459]: Warning: ironic-dnsmasq.container specifies the image "$IRONIC_IMAGE" which is not a fully qualified image name. This is not ideal for performance and security reasons. See the podman-pull manpage discussion of short-name-aliases.conf for details.
Jan 16 20:35:44 localhost systemd-rc-local-generator[1469]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 16 20:35:44 localhost quadlet-generator[1459]: Warning: ironic-httpd.container specifies the image "$IRONIC_IMAGE" which is not a fully qualified image name. This is not ideal for performance and security reasons. See the podman-pull manpage discussion of short-name-aliases.conf for details.
Jan 16 20:35:44 localhost quadlet-generator[1459]: Warning: ironic-inspector.container specifies the image "$IRONIC_IMAGE" which is not a fully qualified image name. This is not ideal for performance and security reasons. See the podman-pull manpage discussion of short-name-aliases.conf for details.
Jan 16 20:35:44 localhost quadlet-generator[1459]: Warning: ironic-ramdisk-logs.container specifies the image "$IRONIC_IMAGE" which is not a fully qualified image name. This is not ideal for performance and security reasons. See the podman-pull manpage discussion of short-name-aliases.conf for details.
Jan 16 20:35:44 localhost quadlet-generator[1459]: Warning: ironic.container specifies the image "$IRONIC_IMAGE" which is not a fully qualified image name. This is not ideal for performance and security reasons. See the podman-pull manpage discussion of short-name-aliases.conf for details.
Jan 16 20:35:44 localhost systemd[1]: Populated /etc with preset unit settings.
Jan 16 20:35:44 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 16 20:35:44 localhost systemd[1]: Stopped Switch Root.
Jan 16 20:35:44 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 16 20:35:44 localhost systemd[1]: Created slice Slice /system/getty.
Jan 16 20:35:44 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 16 20:35:44 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 16 20:35:44 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 16 20:35:44 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Jan 16 20:35:44 localhost systemd[1]: Created slice User and Session Slice.
Jan 16 20:35:44 localhost systemd[1]: Started Forward Password Requests to Clevis Directory Watch.
Jan 16 20:35:44 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 16 20:35:44 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 16 20:35:44 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 16 20:35:44 localhost systemd[1]: Reached target Synchronize afterburn-sshkeys@.service template instances.
Jan 16 20:35:44 localhost systemd[1]: Reached target Local Encrypted Volumes (Pre).
Jan 16 20:35:44 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 16 20:35:44 localhost systemd[1]: Stopped target Switch Root.
Jan 16 20:35:44 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 16 20:35:44 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 16 20:35:44 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 16 20:35:44 localhost systemd[1]: Reached target Host and Network Name Lookups.
Jan 16 20:35:44 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 16 20:35:44 localhost systemd[1]: Reached target Remote File Systems.
Jan 16 20:35:44 localhost systemd[1]: Reached target Slice Units.
Jan 16 20:35:44 localhost systemd[1]: Reached target Swaps.
Jan 16 20:35:44 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 16 20:35:44 localhost systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 16 20:35:44 localhost systemd[1]: Listening on LVM2 poll daemon socket.
Jan 16 20:35:44 localhost systemd[1]: multipathd control socket was skipped because of an unmet condition check (ConditionPathExists=/etc/multipath.conf).
Jan 16 20:35:44 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 16 20:35:44 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 16 20:35:44 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 16 20:35:44 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 16 20:35:44 localhost systemd[1]: Listening on udev Control Socket.
Jan 16 20:35:44 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 16 20:35:44 localhost systemd[1]: Mounting Huge Pages File System...
Jan 16 20:35:44 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 16 20:35:44 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 16 20:35:44 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 16 20:35:44 localhost systemd[1]: Mounting Temporary Directory /tmp...
Jan 16 20:35:44 localhost systemd[1]: Starting CoreOS: Set printk To Level 4 (warn)...
Jan 16 20:35:44 localhost systemd[1]: coreos-rootflags.service: Deactivated successfully.
Jan 16 20:35:44 localhost systemd[1]: Stopped coreos-rootflags.service.
Jan 16 20:35:44 localhost systemd[1]: Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 20:35:44 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 16 20:35:44 localhost systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 16 20:35:44 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 16 20:35:44 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 16 20:35:44 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 16 20:35:44 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 16 20:35:44 localhost systemd[1]: ostree-prepare-root.service: Deactivated successfully.
Jan 16 20:35:44 localhost systemd[1]: Stopped OSTree Prepare OS/.
Jan 16 20:35:44 localhost systemd[1]: Stopped Journal Service.
Jan 16 20:35:44 localhost systemd[1]: systemd-journald.service: Consumed 12.574s CPU time.
Jan 16 20:35:44 localhost kernel: ACPI: bus type drm_connector registered
Jan 16 20:35:44 localhost systemd[1]: Starting Journal Service...
Jan 16 20:35:44 localhost systemd[1]: Starting Load Kernel Modules...
Jan 16 20:35:44 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 16 20:35:44 localhost systemd-journald[1517]: Journal started
Jan 16 20:35:44 localhost systemd-journald[1517]: Runtime Journal (/run/log/journal/efce106942834b6e8dcf2db4b261dcf3) is 8.0M, max 118.3M, 110.3M free.
Jan 16 20:35:41 localhost systemd[1]: Queued start job for default target Graphical Interface.
Jan 16 20:35:41 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 16 20:35:41 localhost systemd[1]: systemd-journald.service: Consumed 12.574s CPU time.
Jan 16 20:35:43 localhost systemd-modules-load[1518]: Module 'msr' is built in
Jan 16 20:35:44 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 16 20:35:44 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 16 20:35:44 localhost systemd[1]: Started Journal Service.
Jan 16 20:35:44 localhost systemd[1]: Mounted Huge Pages File System.
Jan 16 20:35:44 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 16 20:35:44 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 16 20:35:44 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 16 20:35:44 localhost systemd[1]: Mounted Temporary Directory /tmp.
Jan 16 20:35:44 localhost systemd[1]: Finished CoreOS: Set printk To Level 4 (warn).
Jan 16 20:35:44 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 16 20:35:44 localhost systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 16 20:35:44 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 16 20:35:44 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 16 20:35:44 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 16 20:35:44 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 16 20:35:44 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 20:35:44 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 16 20:35:44 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 16 20:35:44 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 16 20:35:44 localhost systemd[1]: Finished Load Kernel Modules.
Jan 16 20:35:44 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 16 20:35:44 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 16 20:35:44 localhost systemd[1]: Reached target Preparation for Network.
Jan 16 20:35:44 localhost systemd[1]: Mounting FUSE Control File System...
Jan 16 20:35:44 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 16 20:35:44 localhost systemd[1]: Special handling of early boot iSCSI sessions was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/iscsi_session).
Jan 16 20:35:44 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 16 20:35:44 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 16 20:35:44 localhost systemd[1]: Starting Create System Users...
Jan 16 20:35:44 localhost systemd[1]: Mounted FUSE Control File System.
Jan 16 20:35:44 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 16 20:35:44 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 16 20:35:44 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 16 20:35:44 localhost systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 16 20:35:44 localhost udevadm[1529]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 16 20:35:44 localhost systemd-sysusers[1527]: Creating group 'sgx' with GID 991.
Jan 16 20:35:44 localhost systemd-sysusers[1527]: Creating group 'systemd-oom' with GID 990.
Jan 16 20:35:44 localhost systemd-sysusers[1527]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 990 and GID 990.
Jan 16 20:35:45 localhost systemd[1]: Finished Create System Users.
Jan 16 20:35:45 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 16 20:35:45 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 16 20:35:45 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 16 20:35:45 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 16 20:35:45 localhost systemd-udevd[1532]: Using default interface naming scheme 'rhel-9.0'.
Jan 16 20:35:46 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 16 20:35:46 localhost systemd[1]: Auto-connect to subsystems on FC-NVME devices found during boot was skipped because of an unmet condition check (ConditionPathExists=/sys/class/fc/fc_udev_device/nvme_discovery).
Jan 16 20:35:46 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 16 20:35:46 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 16 20:35:46 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 16 20:35:46 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 16 20:35:46 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 16 20:35:46 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 16 20:35:46 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 16 20:35:46 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 16 20:35:46 localhost systemd[1]: Condition check resulted in /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 16 20:35:46 localhost systemd[1]: Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 20:35:46 localhost systemd[1]: Special handling of early boot iSCSI sessions was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/iscsi_session).
Jan 16 20:35:46 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 16 20:35:46 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 20:35:46 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 16 20:35:46 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input5
Jan 16 20:35:46 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/5256ed23-0bf9-4f14-8749-ad59e0e9a846 being skipped.
Jan 16 20:35:46 localhost kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 16 20:35:46 localhost kernel: cirrus 0000:00:02.0: vgaarb: deactivate vga console
Jan 16 20:35:47 localhost kernel: Console: switching to colour dummy device 80x25
Jan 16 20:35:47 localhost kernel: [drm] Initialized cirrus 2.0.0 2019 for 0000:00:02.0 on minor 0
Jan 16 20:35:47 localhost kernel: fbcon: cirrusdrmfb (fb0) is primary device
Jan 16 20:35:47 localhost kernel: cirrus 0000:00:02.0: [drm] drm_plane_enable_fb_damage_clips() not called
Jan 16 20:35:47 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 16 20:35:47 localhost kernel: cirrus 0000:00:02.0: [drm] fb0: cirrusdrmfb frame buffer device
Jan 16 20:35:47 localhost systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 16 20:35:47 localhost systemd[1]: Device-Mapper Multipath Device Controller was skipped because of an unmet condition check (ConditionPathExists=/etc/multipath.conf).
Jan 16 20:35:47 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 16 20:35:47 localhost systemd[1]: var.mount: Directory /var to mount over is not empty, mounting anyway.
Jan 16 20:35:47 localhost systemd[1]: Mounting /var...
Jan 16 20:35:47 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/5256ed23-0bf9-4f14-8749-ad59e0e9a846...
Jan 16 20:35:47 localhost systemd[1]: Mounted /var.
Jan 16 20:35:47 localhost systemd[1]: Starting CoreOS Populate LVM Devices File...
Jan 16 20:35:47 localhost systemd[1]: Starting OSTree Remount OS/ Bind Mounts...
Jan 16 20:35:47 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 20:35:47 localhost systemd[1]: Finished OSTree Remount OS/ Bind Mounts.
Jan 16 20:35:47 localhost systemd-fsck[1587]: boot: clean, 366/98304 files, 146507/393216 blocks
Jan 16 20:35:47 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 16 20:35:47 localhost systemd[1]: Starting Load/Save Random Seed...
Jan 16 20:35:47 localhost systemd-journald[1517]: Time spent on flushing to /var/log/journal/efce106942834b6e8dcf2db4b261dcf3 is 69.848ms for 1804 entries.
Jan 16 20:35:47 localhost systemd-journald[1517]: System Journal (/var/log/journal/efce106942834b6e8dcf2db4b261dcf3) is 8.0M, max 3.1G, 3.1G free.
Jan 16 20:35:47 localhost systemd-journald[1517]: Received client request to flush runtime journal.
Jan 16 20:35:47 localhost kernel: EXT4-fs (vda3): mounted filesystem with ordered data mode. Quota mode: none.
Jan 16 20:35:47 localhost coreos-populate-lvmdevices[1586]: No LVM devices detected. Exiting.
Jan 16 20:35:47 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/5256ed23-0bf9-4f14-8749-ad59e0e9a846.
Jan 16 20:35:47 localhost systemd[1]: Finished Load/Save Random Seed.
Jan 16 20:35:47 localhost systemd[1]: Mounting CoreOS Dynamic Mount for /boot...
Jan 16 20:35:47 localhost systemd[1]: Mounted CoreOS Dynamic Mount for /boot.
Jan 16 20:35:47 localhost systemd[1]: Finished CoreOS Populate LVM Devices File.
Jan 16 20:35:47 localhost systemd[1]: Reached target Local File Systems.
Jan 16 20:35:47 localhost systemd[1]: Starting Run update-ca-trust...
Jan 16 20:35:47 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 16 20:35:47 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 16 20:35:47 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 20:35:47 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 16 20:35:47 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 16 20:35:47 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 16 20:35:47 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 16 20:35:47 localhost bootctl[1603]: Couldn't find EFI system partition, skipping.
Jan 16 20:35:47 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 16 20:35:48 localhost systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/tmp.conf:12: Duplicate line for path "/var/tmp", ignoring.
Jan 16 20:35:48 localhost systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 16 20:35:48 localhost systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/var.conf:19: Duplicate line for path "/var/cache", ignoring.
Jan 16 20:35:48 localhost systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/var.conf:21: Duplicate line for path "/var/lib", ignoring.
Jan 16 20:35:48 localhost systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/var.conf:23: Duplicate line for path "/var/spool", ignoring.
Jan 16 20:35:48 localhost systemd-tmpfiles[1604]: "/home" already exists and is not a directory.
Jan 16 20:35:48 localhost systemd-tmpfiles[1604]: "/srv" already exists and is not a directory.
Jan 16 20:35:48 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 16 20:35:48 localhost systemd-tmpfiles[1604]: "/root" already exists and is not a directory.
Jan 16 20:35:48 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 16 20:35:48 localhost systemd[1]: Starting Security Auditing Service...
Jan 16 20:35:48 localhost systemd[1]: Starting RHEL CoreOS Rebuild SELinux Policy If Necessary...
Jan 16 20:35:48 localhost systemd[1]: Starting RHCOS Fix SELinux Labeling For /usr/local/sbin...
Jan 16 20:35:48 localhost chcon[1610]: changing security context of '/usr/local/sbin'
Jan 16 20:35:48 localhost rhcos-rebuild-selinux-policy[1609]: RHEL_VERSION=9.2
Jan 16 20:35:48 localhost rhcos-rebuild-selinux-policy[1609]: Assuming we have new enough ostree
Jan 16 20:35:48 localhost auditd[1612]: No plugins found, not dispatching events
Jan 16 20:35:48 localhost auditd[1612]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Jan 16 20:35:48 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 16 20:35:48 localhost systemd[1]: Finished RHEL CoreOS Rebuild SELinux Policy If Necessary.
Jan 16 20:35:48 localhost sh[1619]: changing security context of '/var/usrlocal/sbin'
Jan 16 20:35:48 localhost systemd[1]: Finished RHCOS Fix SELinux Labeling For /usr/local/sbin.
Jan 16 20:35:48 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 16 20:35:48 localhost systemd[1]: Starting Update is Completed...
Jan 16 20:35:48 localhost systemd[1]: Finished Update is Completed.
Jan 16 20:35:48 localhost augenrules[1634]: No rules
Jan 16 20:35:48 localhost augenrules[1634]: enabled 1
Jan 16 20:35:48 localhost augenrules[1634]: failure 1
Jan 16 20:35:48 localhost augenrules[1634]: pid 1612
Jan 16 20:35:48 localhost augenrules[1634]: rate_limit 0
Jan 16 20:35:48 localhost augenrules[1634]: backlog_limit 8192
Jan 16 20:35:48 localhost augenrules[1634]: lost 0
Jan 16 20:35:48 localhost augenrules[1634]: backlog 4
Jan 16 20:35:48 localhost augenrules[1634]: backlog_wait_time 60000
Jan 16 20:35:48 localhost augenrules[1634]: backlog_wait_time_actual 0
Jan 16 20:35:48 localhost augenrules[1634]: enabled 1
Jan 16 20:35:48 localhost augenrules[1634]: failure 1
Jan 16 20:35:48 localhost augenrules[1634]: pid 1612
Jan 16 20:35:48 localhost augenrules[1634]: rate_limit 0
Jan 16 20:35:48 localhost augenrules[1634]: backlog_limit 8192
Jan 16 20:35:48 localhost augenrules[1634]: lost 0
Jan 16 20:35:48 localhost augenrules[1634]: backlog 3
Jan 16 20:35:48 localhost augenrules[1634]: backlog_wait_time 60000
Jan 16 20:35:48 localhost augenrules[1634]: backlog_wait_time_actual 0
Jan 16 20:35:48 localhost augenrules[1634]: enabled 1
Jan 16 20:35:48 localhost augenrules[1634]: failure 1
Jan 16 20:35:48 localhost augenrules[1634]: pid 1612
Jan 16 20:35:48 localhost augenrules[1634]: rate_limit 0
Jan 16 20:35:48 localhost augenrules[1634]: backlog_limit 8192
Jan 16 20:35:48 localhost augenrules[1634]: lost 0
Jan 16 20:35:48 localhost augenrules[1634]: backlog 4
Jan 16 20:35:48 localhost augenrules[1634]: backlog_wait_time 60000
Jan 16 20:35:48 localhost augenrules[1634]: backlog_wait_time_actual 0
Jan 16 20:35:48 localhost systemd[1]: Started Security Auditing Service.
Jan 16 20:35:48 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 16 20:35:48 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 16 20:35:48 localhost systemd[1]: Reached target System Initialization.
Jan 16 20:35:48 localhost systemd[1]: Started OSTree Monitor Staged Deployment.
Jan 16 20:35:48 localhost systemd[1]: Started Daily rotation of log files.
Jan 16 20:35:48 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 16 20:35:48 localhost systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 16 20:35:48 localhost systemd[1]: Reached target Path Units.
Jan 16 20:35:48 localhost systemd[1]: Reached target Timer Units.
Jan 16 20:35:48 localhost systemd[1]: Listening on bootupd.socket.
Jan 16 20:35:48 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 16 20:35:48 localhost systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 16 20:35:48 localhost systemd[1]: Listening on Open-iSCSI iscsiuio Socket.
Jan 16 20:35:49 localhost systemd[1]: Listening on Journal Gateway Service Socket.
Jan 16 20:35:49 localhost systemd[1]: Reached target Socket Units.
Jan 16 20:35:49 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 16 20:35:49 localhost systemd[1]: Reached target Basic System.
Jan 16 20:35:49 localhost systemd[1]: Starting Change ownership of journal-gatewayd.key...
Jan 16 20:35:49 localhost systemd[1]: Starting CoreOS Generate iSCSI Initiator Name...
Jan 16 20:35:49 localhost systemd[1]: CoreOS Delete Ignition Config From Hypervisor was skipped because no trigger condition checks were met.
Jan 16 20:35:49 localhost systemd[1]: Starting CoreOS Mark Ignition Boot Complete...
Jan 16 20:35:49 localhost systemd[1]: Starting Create Ignition Status Issue Files...
Jan 16 20:35:49 localhost systemd[1]: Starting CoreOS Configure Chrony Based On The Platform...
Jan 16 20:35:49 localhost systemd[1]: Starting Generation of shadow ID ranges for CRI-O...
Jan 16 20:35:49 localhost kernel: EXT4-fs (vda3): re-mounted. Quota mode: none.
Jan 16 20:35:49 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 16 20:35:49 localhost systemd[1]: Starting Shared volume for ironic...
Jan 16 20:35:49 localhost systemd[1]: Started irqbalance daemon.
Jan 16 20:35:49 localhost systemd[1]: Software RAID monitoring and management was skipped because of an unmet condition check (ConditionPathExists=/etc/mdadm.conf).
Jan 16 20:35:49 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 16 20:35:49 localhost systemd[1]: OSTree Complete Boot was skipped because no trigger condition checks were met.
Jan 16 20:35:49 localhost systemd[1]: Read-Only Sysroot Migration was skipped because of an unmet condition check (ConditionPathIsReadWrite=/sysroot).
Jan 16 20:35:49 localhost systemd[1]: Started QEMU Guest Agent.
Jan 16 20:35:49 localhost systemd[1]: Starting OpenSSH ecdsa Server Key Generation...
Jan 16 20:35:49 localhost systemd[1]: Starting OpenSSH ed25519 Server Key Generation...
Jan 16 20:35:49 localhost systemd[1]: Starting OpenSSH rsa Server Key Generation...
Jan 16 20:35:49 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 16 20:35:49 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 16 20:35:49 localhost systemd[1]: Starting User Login Management...
Jan 16 20:35:49 localhost systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 16 20:35:49 localhost systemd[1]: VGAuth Service for open-vm-tools was skipped because of an unmet condition check (ConditionVirtualization=vmware).
Jan 16 20:35:49 localhost systemd[1]: Service for virtual machines hosted on VMware was skipped because of an unmet condition check (ConditionVirtualization=vmware).
Jan 16 20:35:49 localhost systemd[1]: Finished CoreOS Generate iSCSI Initiator Name.
Jan 16 20:35:49 localhost systemd[1]: Finished CoreOS Mark Ignition Boot Complete.
Jan 16 20:35:49 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 16 20:35:49 localhost systemd[1]: Starting Cleanup of Temporary Directories...
Jan 16 20:35:49 localhost systemd[1]: Finished CoreOS Configure Chrony Based On The Platform.
Jan 16 20:35:49 localhost systemd[1]: Starting Network Manager...
Jan 16 20:35:49 localhost groupadd[1698]: group added to /etc/group: name=containers, GID=989
Jan 16 20:35:49 localhost systemd[1]: Starting NTP client/server...
Jan 16 20:35:49 localhost groupadd[1698]: group added to /etc/gshadow: name=containers
Jan 16 20:35:49 localhost systemd-logind[1687]: New seat seat0.
Jan 16 20:35:49 localhost systemd-logind[1687]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 16 20:35:49 localhost systemd-logind[1687]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 16 20:35:49 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 16 20:35:49 localhost dbus-broker-launch[1711]: Looking up NSS user entry for 'dbus'...
Jan 16 20:35:49 localhost dbus-broker-launch[1711]: NSS returned NAME 'dbus' and UID '81'
Jan 16 20:35:49 localhost dbus-broker-launch[1711]: Looking up NSS user entry for 'polkitd'...
Jan 16 20:35:49 localhost dbus-broker-launch[1711]: NSS returned NAME 'polkitd' and UID '999'
Jan 16 20:35:49 localhost systemd[1]: sshd-keygen@ed25519.service: Deactivated successfully.
Jan 16 20:35:49 localhost systemd[1]: Finished OpenSSH ed25519 Server Key Generation.
Jan 16 20:35:49 localhost systemd[1]: sshd-keygen@ecdsa.service: Deactivated successfully.
Jan 16 20:35:49 localhost systemd[1]: Finished OpenSSH ecdsa Server Key Generation.
Jan 16 20:35:49 localhost NetworkManager[1706]: [1705437349.6866] NetworkManager (version 1.42.2-8.el9_2) is starting... (boot:08825aa6-a3fb-4095-b0cb-b7310a6e5446)
Jan 16 20:35:49 localhost NetworkManager[1706]: [1705437349.6872] Read config: /etc/NetworkManager/NetworkManager.conf (lib: 10-disable-default-plugins.conf, 20-client-id-from-mac.conf) (etc: 99-baremetal.conf)
Jan 16 20:35:49 localhost groupadd[1698]: new group: name=containers, GID=989
Jan 16 20:35:49 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 16 20:35:49 localhost chronyd[1728]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Jan 16 20:35:49 localhost dbus-broker-lau[1711]: Ready
Jan 16 20:35:49 localhost NetworkManager[1706]: [1705437349.7918] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 16 20:35:49 localhost systemd[1]: Started User Login Management.
Jan 16 20:35:49 localhost systemd-tmpfiles[1697]: /usr/lib/tmpfiles.d/tmp.conf:12: Duplicate line for path "/var/tmp", ignoring.
Jan 16 20:35:49 localhost useradd[1717]: new group: name=systemd-journal-gateway, GID=988
Jan 16 20:35:49 localhost useradd[1717]: new user: name=systemd-journal-gateway, UID=989, GID=988, home=/var/home/systemd-journal-gateway, shell=/bin/bash, from=none
Jan 16 20:35:49 localhost chronyd[1728]: Using right/UTC timezone to obtain leap second data
Jan 16 20:35:49 localhost chronyd[1728]: Loaded seccomp filter (level 2)
Jan 16 20:35:49 localhost systemd[1]: Started NTP client/server.
Jan 16 20:35:49 localhost NetworkManager[1706]: [1705437349.8403] manager[0x558cb5297020]: monitoring kernel firmware directory '/lib/firmware'.
Jan 16 20:35:49 localhost systemd[1]: Started Network Manager.
Jan 16 20:35:49 localhost systemd[1]: Finished Create Ignition Status Issue Files.
Jan 16 20:35:49 localhost systemd-tmpfiles[1697]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 16 20:35:49 localhost systemd-tmpfiles[1697]: /usr/lib/tmpfiles.d/var.conf:19: Duplicate line for path "/var/cache", ignoring.
Jan 16 20:35:49 localhost systemd-tmpfiles[1697]: /usr/lib/tmpfiles.d/var.conf:21: Duplicate line for path "/var/lib", ignoring.
Jan 16 20:35:49 localhost systemd-tmpfiles[1697]: /usr/lib/tmpfiles.d/var.conf:23: Duplicate line for path "/var/spool", ignoring.
Jan 16 20:35:49 localhost systemd[1]: Reached target Network.
Jan 16 20:35:49 localhost systemd[1]: Starting Network Manager Wait Online...
Jan 16 20:35:50 localhost systemd[1]: Update GCP routes for forwarded IPs. was skipped because no trigger condition checks were met.
Jan 16 20:35:50 localhost systemd[1]: Starting Apply nmstate on-disk state...
Jan 16 20:35:50 localhost systemd[1]: Starting Hostname Service...
Jan 16 20:35:50 localhost systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 16 20:35:50 localhost systemd[1]: Finished Cleanup of Temporary Directories.
Jan 16 20:35:50 localhost nmstatectl[1740]: [2024-01-16T20:35:50Z INFO nmstatectl::service] No nmstate config(end with .yml) found in config folder /etc/nmstate
Jan 16 20:35:50 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 16 20:35:50 localhost systemd[1]: Finished Apply nmstate on-disk state.
Jan 16 20:35:50 localhost useradd[1734]: new user: name=containers, UID=988, GID=989, home=/var/home/containers, shell=/sbin/nologin, from=none
Jan 16 20:35:50 localhost ironic-volume[1664]: systemd-ironic
Jan 16 20:35:50 localhost systemd[1]: Finished Shared volume for ironic.
Jan 16 20:35:50 localhost systemd[1]: Finished Change ownership of journal-gatewayd.key.
Jan 16 20:35:50 localhost systemd[1]: Started Hostname Service.
Jan 16 20:35:50 localhost NetworkManager[1706]: [1705437350.5112] hostname: hostname: using hostnamed
Jan 16 20:35:50 localhost systemd[1]: crio-subid.service: Deactivated successfully.
Jan 16 20:35:50 localhost systemd[1]: Finished Generation of shadow ID ranges for CRI-O.
Jan 16 20:35:50 localhost NetworkManager[1706]: [1705437350.5154] dns-mgr: init: dns=default,systemd-resolved rc-manager=unmanaged
Jan 16 20:35:50 localhost NetworkManager[1706]: [1705437350.5183] policy: set-hostname: set hostname to 'localhost.localdomain' (no hostname found)
Jan 16 20:35:50 localhost.localdomain systemd-hostnamed[1747]: Hostname set to <localhost.localdomain> (transient)
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.5748] manager[0x558cb5297020]: rfkill: Wi-Fi hardware radio set enabled
Jan 16 20:35:50 localhost.localdomain systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.5753] manager[0x558cb5297020]: rfkill: WWAN hardware radio set enabled
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.6051] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.42.2-8.el9_2/libnm-device-plugin-ovs.so)
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.6106] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-8.el9_2/libnm-device-plugin-team.so)
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.6113] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.6133] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.6142] manager: Networking is enabled by state file
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.6166] settings: Loaded settings plugin: keyfile (internal)
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.6222] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-8.el9_2/libnm-settings-plugin-ifcfg-rh.so")
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.6248] keyfile: load: "/etc/NetworkManager/system-connections/nmconnection": failed to load connection: invalid connection: connection.type: property is missing
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.6310] dhcp: init: Using DHCP client 'internal'
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.6326] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 16 20:35:50 localhost.localdomain systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.6431] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.6606] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.6693] device (lo): Activation: starting connection 'lo' (6c867095-7af2-44a0-8bac-40c7bcfa2f70)
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.6747] manager: (ens3): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.6822] settings: (ens3): created default wired connection 'Wired connection 1'
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.6828] device (ens3): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Jan 16 20:35:50 localhost.localdomain systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7195] manager: (ens4): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7275] settings: (ens4): created default wired connection 'Wired connection 2'
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7285] device (ens4): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7507] ovsdb: disconnected from ovsdb
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7518] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7532] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7546] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7566] device (ens3): carrier: link connected
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7580] device (ens4): carrier: link connected
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7718] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7741] device (ens3): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7778] device (ens4): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7816] policy: auto-activating connection 'Wired connection 1' (40df3327-4a42-334b-87f2-a80c0901d236)
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7826] policy: auto-activating connection 'Wired connection 2' (01a00a81-2a9f-354e-9949-a4ae9b243f35)
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7836] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7853] device (ens3): Activation: starting connection 'Wired connection 1' (40df3327-4a42-334b-87f2-a80c0901d236)
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7865] device (ens4): Activation: starting connection 'Wired connection 2' (01a00a81-2a9f-354e-9949-a4ae9b243f35)
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7868] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7898] device (lo): Activation: successful, device activated.
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7926] device (ens3): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7941] manager: NetworkManager state is now CONNECTING
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7952] device (ens3): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7983] device (ens4): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.7998] device (ens4): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.8026] device (ens3): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.8064] dhcp4 (ens3): activation: beginning transaction (timeout in 45 seconds)
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.8086] device (ens4): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jan 16 20:35:50 localhost.localdomain NetworkManager[1706]: [1705437350.8118] dhcp4 (ens4): activation: beginning transaction (timeout in 45 seconds)
Jan 16 20:35:50 localhost.localdomain root[1826]: NM local-dns-prepender triggered by lo up.
Jan 16 20:35:50 localhost.localdomain nm-dispatcher[1826]: <13>Jan 16 20:35:50 root: NM local-dns-prepender triggered by lo up.
Jan 16 20:35:51 localhost.localdomain nm-dispatcher[1829]: Failed to get unit file state for systemd-resolved.service: No such file or directory
Jan 16 20:35:51 localhost.localdomain root[1830]: NM local-dns-prepender: Checking if local DNS IP is the first entry in resolv.conf
Jan 16 20:35:51 localhost.localdomain nm-dispatcher[1830]: <13>Jan 16 20:35:51 root: NM local-dns-prepender: Checking if local DNS IP is the first entry in resolv.conf
Jan 16 20:35:51 localhost.localdomain nm-dispatcher[1831]: grep: /etc/resolv.conf: No such file or directory
Jan 16 20:35:51 localhost.localdomain root[1834]: NM local-dns-prepender: Looking for '# Generated by NetworkManager' in /etc/resolv.conf to place 'nameserver 127.0.0.1'
Jan 16 20:35:51 localhost.localdomain nm-dispatcher[1834]: <13>Jan 16 20:35:51 root: NM local-dns-prepender: Looking for '# Generated by NetworkManager' in /etc/resolv.conf to place 'nameserver 127.0.0.1'
Jan 16 20:35:51 localhost.localdomain nm-dispatcher[1835]: cp: cannot stat '/var/run/NetworkManager/resolv.conf': No such file or directory
Jan 16 20:35:51 localhost.localdomain nm-dispatcher[1836]: sed: can't read /etc/resolv.tmp: No such file or directory
Jan 16 20:35:51 localhost.localdomain nm-dispatcher[1837]: mv: cannot stat '/etc/resolv.tmp': No such file or directory
Jan 16 20:35:51 localhost.localdomain nm-dispatcher[1778]: req:5 'up' [lo], "/etc/NetworkManager/dispatcher.d/30-local-dns-prepender": complete: failed with Script '/etc/NetworkManager/dispatcher.d/30-local-dns-prepender' exited with status 1.
Jan 16 20:35:51 localhost.localdomain NetworkManager[1706]: [1705437351.0672] dispatcher: (5) /etc/NetworkManager/dispatcher.d/30-local-dns-prepender failed (failed): Script '/etc/NetworkManager/dispatcher.d/30-local-dns-prepender' exited with status 1.
Jan 16 20:35:51 localhost.localdomain systemd[1]: Finished Run update-ca-trust.
Jan 16 20:35:51 localhost.localdomain systemd[1]: Reached target First Boot Complete.
Jan 16 20:35:51 localhost.localdomain systemd[1]: Starting Commit a transient machine-id on disk...
Jan 16 20:35:51 localhost.localdomain systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 16 20:35:51 localhost.localdomain systemd[1]: Finished Commit a transient machine-id on disk.
Jan 16 20:35:51 localhost.localdomain NetworkManager[1706]: [1705437351.8244] dhcp4 (ens3): state changed new lease, address=10.0.0.70
Jan 16 20:35:51 localhost.localdomain NetworkManager[1706]: [1705437351.8255] policy: set 'Wired connection 1' (ens3) as default for IPv4 routing and DNS
Jan 16 20:35:51 localhost.localdomain systemd[1]: sshd-keygen@rsa.service: Deactivated successfully.
Jan 16 20:35:51 localhost.localdomain systemd[1]: Finished OpenSSH rsa Server Key Generation.
Jan 16 20:35:51 localhost.localdomain NetworkManager[1706]: [1705437351.8504] device (ens3): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Jan 16 20:35:51 localhost.localdomain systemd[1]: sshd-keygen@rsa.service: Consumed 1.753s CPU time.
Jan 16 20:35:51 localhost.localdomain systemd[1]: Reached target sshd-keygen.target.
Jan 16 20:35:51 localhost.localdomain NetworkManager[1706]: [1705437351.8698] device (ens3): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Jan 16 20:35:51 localhost.localdomain NetworkManager[1706]: [1705437351.8706] device (ens3): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Jan 16 20:35:51 localhost.localdomain NetworkManager[1706]: [1705437351.8729] device (ens3): Activation: successful, device activated.
Jan 16 20:35:51 localhost.localdomain NetworkManager[1706]: [1705437351.8758] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 16 20:35:51 localhost.localdomain systemd[1]: Starting Generate SSH keys snippet for display via console-login-helper-messages...
Jan 16 20:35:51 localhost.localdomain systemd[1]: Starting OpenSSH server daemon...
Jan 16 20:35:51 localhost.localdomain sshd[1857]: main: sshd: ssh-rsa algorithm is disabled
Jan 16 20:35:52 localhost.localdomain sshd[1857]: Server listening on 0.0.0.0 port 22.
Jan 16 20:35:52 localhost.localdomain sshd[1857]: Server listening on :: port 22.
Jan 16 20:35:52 localhost.localdomain systemd[1]: Started OpenSSH server daemon.
Jan 16 20:35:52 localhost.localdomain systemd[1]: Finished Generate SSH keys snippet for display via console-login-helper-messages.
Jan 16 20:35:52 localhost.localdomain systemd[1]: Starting Permit User Sessions...
Jan 16 20:35:52 localhost.localdomain root[1881]: NM local-dns-prepender triggered by ens3 dhcp4-change.
Jan 16 20:35:52 localhost.localdomain nm-dispatcher[1881]: <13>Jan 16 20:35:52 root: NM local-dns-prepender triggered by ens3 dhcp4-change.
Jan 16 20:35:52 localhost.localdomain nm-dispatcher[1883]: NM resolv-prepender: Checking for nameservers in /var/run/NetworkManager/resolv.conf
Jan 16 20:35:52 localhost.localdomain nm-dispatcher[1884]: nameserver 10.0.0.254
Jan 16 20:35:52 localhost.localdomain systemd[1]: Finished Permit User Sessions.
Jan 16 20:35:52 localhost.localdomain systemd[1]: CoreOS Live ISO virtio success was skipped because of an unmet condition check (ConditionPathExists=/dev/virtio-ports/coreos.liveiso-success).
Jan 16 20:35:52 localhost.localdomain systemd[1]: Started Getty on tty1.
Jan 16 20:35:52 localhost.localdomain systemd[1]: Started Serial Getty on ttyS0.
Jan 16 20:35:52 localhost.localdomain systemd[1]: Reached target Login Prompts.
Jan 16 20:35:52 localhost.localdomain nm-dispatcher[1885]: Failed to get unit file state for systemd-resolved.service: No such file or directory
Jan 16 20:35:52 localhost.localdomain root[1888]: NM local-dns-prepender: Checking if local DNS IP is the first entry in resolv.conf
Jan 16 20:35:52 localhost.localdomain nm-dispatcher[1888]: <13>Jan 16 20:35:52 root: NM local-dns-prepender: Checking if local DNS IP is the first entry in resolv.conf
Jan 16 20:35:52 localhost.localdomain nm-dispatcher[1889]: grep: /etc/resolv.conf: No such file or directory
Jan 16 20:35:52 localhost.localdomain root[1892]: NM local-dns-prepender: Looking for '# Generated by NetworkManager' in /etc/resolv.conf to place 'nameserver 127.0.0.1'
Jan 16 20:35:52 localhost.localdomain nm-dispatcher[1892]: <13>Jan 16 20:35:52 root: NM local-dns-prepender: Looking for '# Generated by NetworkManager' in /etc/resolv.conf to place 'nameserver 127.0.0.1'
Jan 16 20:35:52 localhost.localdomain root[1909]: NM local-dns-prepender triggered by ens3 up.
Jan 16 20:35:52 localhost.localdomain nm-dispatcher[1909]: <13>Jan 16 20:35:52 root: NM local-dns-prepender triggered by ens3 up.
Jan 16 20:35:52 localhost.localdomain nm-dispatcher[1912]: Failed to get unit file state for systemd-resolved.service: No such file or directory
Jan 16 20:35:52 localhost.localdomain root[1913]: NM local-dns-prepender: Checking if local DNS IP is the first entry in resolv.conf
Jan 16 20:35:52 localhost.localdomain nm-dispatcher[1913]: <13>Jan 16 20:35:52 root: NM local-dns-prepender: Checking if local DNS IP is the first entry in resolv.conf
Jan 16 20:35:52 localhost.localdomain root[1917]: NM local-dns-prepender: local DNS IP already is the first entry in resolv.conf
Jan 16 20:35:52 localhost.localdomain nm-dispatcher[1917]: <13>Jan 16 20:35:52 root: NM local-dns-prepender: local DNS IP already is the first entry in resolv.conf
Jan 16 20:35:57 localhost.localdomain chronyd[1728]: Selected source 152.70.69.232 (2.rhel.pool.ntp.org)
Jan 16 20:35:57 localhost.localdomain chronyd[1728]: System clock TAI offset set to 37 seconds
Jan 16 20:36:02 localhost.localdomain systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 16 20:36:19 localhost.localdomain sshd[1940]: main: sshd: ssh-rsa algorithm is disabled
Jan 16 20:36:20 localhost.localdomain systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 16 20:36:22 localhost.localdomain sshd[1940]: Accepted publickey for core from 10.0.0.253 port 44546 ssh2: ED25519 SHA256:p7ow9cnTIwIrmdA7axe7UNqqNGs9HwXLz6dgeS12JKA
Jan 16 20:36:22 localhost.localdomain systemd-logind[1687]: New session 1 of user core.
Jan 16 20:36:22 localhost.localdomain systemd[1]: Created slice User Slice of UID 1000.
Jan 16 20:36:22 localhost.localdomain systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 16 20:36:22 localhost.localdomain systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 16 20:36:22 localhost.localdomain systemd[1]: Starting User Manager for UID 1000...
Jan 16 20:36:22 localhost.localdomain systemd[1948]: pam_unix(systemd-user:session): session opened for user core(uid=1000) by (uid=0)
Jan 16 20:36:22 localhost.localdomain systemd[1948]: Queued start job for default target Main User Target.
Jan 16 20:36:22 localhost.localdomain systemd[1948]: Created slice User Application Slice.
Jan 16 20:36:22 localhost.localdomain systemd[1948]: Started Daily Cleanup of User's Temporary Directories.
Jan 16 20:36:22 localhost.localdomain systemd[1948]: Reached target Paths.
Jan 16 20:36:22 localhost.localdomain systemd[1948]: Reached target Timers.
Jan 16 20:36:22 localhost.localdomain systemd[1948]: Starting D-Bus User Message Bus Socket...
Jan 16 20:36:22 localhost.localdomain systemd[1948]: Starting Create User's Volatile Files and Directories...
Jan 16 20:36:22 localhost.localdomain systemd[1948]: Listening on D-Bus User Message Bus Socket.
Jan 16 20:36:22 localhost.localdomain systemd[1948]: Finished Create User's Volatile Files and Directories.
Jan 16 20:36:22 localhost.localdomain systemd[1948]: Reached target Sockets.
Jan 16 20:36:22 localhost.localdomain systemd[1948]: Reached target Basic System.
Jan 16 20:36:22 localhost.localdomain systemd[1948]: Reached target Main User Target.
Jan 16 20:36:22 localhost.localdomain systemd[1948]: Startup finished in 333ms.
Jan 16 20:36:22 localhost.localdomain systemd[1]: Started User Manager for UID 1000.
Jan 16 20:36:22 localhost.localdomain systemd[1]: Started Session 1 of User core.
Jan 16 20:36:23 localhost.localdomain sshd[1940]: pam_unix(sshd:session): session opened for user core(uid=1000) by (uid=0)
Jan 16 20:36:23 localhost.localdomain systemd[1]: Starting Hostname Service...
Jan 16 20:36:23 localhost.localdomain systemd[1]: Started Hostname Service.
Jan 16 20:36:36 localhost.localdomain NetworkManager[1706]: [1705437396.2770] device (ens4): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed')
Jan 16 20:36:36 localhost.localdomain NetworkManager[1706]: [1705437396.2797] device (ens4): Activation: failed for connection 'Wired connection 2'
Jan 16 20:36:36 localhost.localdomain NetworkManager[1706]: [1705437396.2807] device (ens4): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed')
Jan 16 20:36:36 localhost.localdomain NetworkManager[1706]: [1705437396.2961] dhcp4 (ens4): canceled DHCP transaction
Jan 16 20:36:36 localhost.localdomain NetworkManager[1706]: [1705437396.2962] dhcp4 (ens4): activation: beginning transaction (timeout in 45 seconds)
Jan 16 20:36:36 localhost.localdomain NetworkManager[1706]: [1705437396.2963] dhcp4 (ens4): state changed no lease
Jan 16 20:36:36 localhost.localdomain NetworkManager[1706]: [1705437396.3014] policy: set-hostname: set hostname to 'localhost.localdomain' (no hostname found)
Jan 16 20:36:36 localhost.localdomain NetworkManager[1706]: [1705437396.3033] policy: auto-activating connection 'Wired connection 2' (01a00a81-2a9f-354e-9949-a4ae9b243f35)
Jan 16 20:36:36 localhost.localdomain NetworkManager[1706]: [1705437396.3054] device (ens4): Activation: starting connection 'Wired connection 2' (01a00a81-2a9f-354e-9949-a4ae9b243f35)
Jan 16 20:36:36 localhost.localdomain NetworkManager[1706]: [1705437396.3058] device (ens4): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jan 16 20:36:36 localhost.localdomain NetworkManager[1706]: [1705437396.3071] device (ens4): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jan 16 20:36:36 localhost.localdomain NetworkManager[1706]: [1705437396.3103] device (ens4): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jan 16 20:36:36 localhost.localdomain NetworkManager[1706]: [1705437396.3148] dhcp4 (ens4): activation: beginning transaction (timeout in 45 seconds)
Jan 16 20:36:36 localhost.localdomain systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 16 20:36:36 localhost.localdomain systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 16 20:36:46 localhost.localdomain systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 16 20:36:50 localhost.localdomain systemd[1]: NetworkManager-wait-online.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 20:36:50 localhost.localdomain systemd[1]: NetworkManager-wait-online.service: Failed with result 'exit-code'.
Jan 16 20:36:50 localhost.localdomain systemd[1]: Failed to start Network Manager Wait Online.
Jan 16 20:36:50 localhost.localdomain systemd[1]: Reached target Network is Online.
Jan 16 20:36:50 localhost.localdomain systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Jan 16 20:36:50 localhost.localdomain systemd[1]: Starting Download the OpenShift Release Image...
Jan 16 20:36:50 localhost.localdomain systemd[1]: Starting Notify NFS peers of a restart...
Jan 16 20:36:50 localhost.localdomain systemd[1]: Starting NFS status monitor for NFSv2/3 locking....
Jan 16 20:36:50 localhost.localdomain sm-notify[2008]: Version 2.5.4 starting
Jan 16 20:36:50 localhost.localdomain systemd[1]: Starting Wait for iptables to be initialised...
Jan 16 20:36:50 localhost.localdomain systemd[1]: Started Notify NFS peers of a restart.
Jan 16 20:36:50 localhost.localdomain systemd[1]: Starting RPC Bind...
Jan 16 20:36:50 localhost.localdomain systemd[1]: Started RPC Bind.
Jan 16 20:36:50 localhost.localdomain rpc.statd[2016]: Version 2.5.4 starting
Jan 16 20:36:50 localhost.localdomain rpc.statd[2016]: Flags: TI-RPC
Jan 16 20:36:50 localhost.localdomain rpc.statd[2016]: Initializing NSM state
Jan 16 20:36:50 localhost.localdomain iptables[2010]: Chain INPUT (policy ACCEPT)
Jan 16 20:36:50 localhost.localdomain iptables[2010]: target prot opt source destination
Jan 16 20:36:50 localhost.localdomain iptables[2010]: Chain FORWARD (policy ACCEPT)
Jan 16 20:36:50 localhost.localdomain iptables[2010]: target prot opt source destination
Jan 16 20:36:50 localhost.localdomain iptables[2010]: Chain OUTPUT (policy ACCEPT)
Jan 16 20:36:50 localhost.localdomain iptables[2010]: target prot opt source destination
Jan 16 20:36:50 localhost.localdomain systemd[1]: Finished Wait for iptables to be initialised.
Jan 16 20:36:50 localhost.localdomain systemd[1]: Started NFS status monitor for NFSv2/3 locking..
Jan 16 20:36:50 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 16 20:36:50 localhost.localdomain release-image-download.sh[2007]: Pulling quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa...
Jan 16 20:36:59 localhost.localdomain sshd[2050]: main: sshd: ssh-rsa algorithm is disabled
Jan 16 20:36:59 localhost.localdomain sshd[2050]: Accepted publickey for core from 10.0.0.253 port 46444 ssh2: ED25519 SHA256:p7ow9cnTIwIrmdA7axe7UNqqNGs9HwXLz6dgeS12JKA
Jan 16 20:36:59 localhost.localdomain systemd-logind[1687]: New session 3 of user core.
Jan 16 20:36:59 localhost.localdomain systemd[1]: Started Session 3 of User core.
Jan 16 20:36:59 localhost.localdomain sshd[2050]: pam_unix(sshd:session): session opened for user core(uid=1000) by (uid=0)
Jan 16 20:37:04 localhost.localdomain chronyd[1728]: Selected source 103.146.168.7 (2.rhel.pool.ntp.org)
Jan 16 20:37:06 localhost.localdomain NetworkManager[1706]: [1705437426.3278] policy: set-hostname: set hostname to 'localhost.localdomain' (no hostname found)
Jan 16 20:37:06 localhost.localdomain systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 16 20:37:06 localhost.localdomain systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 16 20:37:12 localhost.localdomain kernel: VFS: idmapped mount is not enabled.
Jan 16 20:37:16 localhost.localdomain systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 16 20:37:21 localhost.localdomain NetworkManager[1706]: [1705437441.2764] device (ens4): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 16 20:37:21 localhost.localdomain NetworkManager[1706]: [1705437441.2778] device (ens4): Activation: failed for connection 'Wired connection 2' Jan 16 20:37:21 localhost.localdomain NetworkManager[1706]: [1705437441.2784] device (ens4): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 16 20:37:21 localhost.localdomain NetworkManager[1706]: [1705437441.2811] dhcp4 (ens4): canceled DHCP transaction Jan 16 20:37:21 localhost.localdomain NetworkManager[1706]: [1705437441.2812] dhcp4 (ens4): activation: beginning transaction (timeout in 45 seconds) Jan 16 20:37:21 localhost.localdomain NetworkManager[1706]: [1705437441.2812] dhcp4 (ens4): state changed no lease Jan 16 20:37:21 localhost.localdomain NetworkManager[1706]: [1705437441.2836] policy: set-hostname: set hostname to 'localhost.localdomain' (no hostname found) Jan 16 20:37:21 localhost.localdomain NetworkManager[1706]: [1705437441.2845] policy: auto-activating connection 'Wired connection 2' (01a00a81-2a9f-354e-9949-a4ae9b243f35) Jan 16 20:37:21 localhost.localdomain NetworkManager[1706]: [1705437441.2854] device (ens4): Activation: starting connection 'Wired connection 2' (01a00a81-2a9f-354e-9949-a4ae9b243f35) Jan 16 20:37:21 localhost.localdomain NetworkManager[1706]: [1705437441.2857] device (ens4): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 16 20:37:21 localhost.localdomain NetworkManager[1706]: [1705437441.2871] device (ens4): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 16 20:37:21 localhost.localdomain NetworkManager[1706]: [1705437441.2891] device (ens4): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 16 20:37:21 localhost.localdomain NetworkManager[1706]: [1705437441.2926] dhcp4 (ens4): activation: beginning transaction (timeout in 45 seconds) Jan 16 20:37:21 localhost.localdomain systemd[1]: Starting Network Manager Script Dispatcher Service... Jan 16 20:37:21 localhost.localdomain systemd[1]: Started Network Manager Script Dispatcher Service. Jan 16 20:37:31 localhost.localdomain systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Jan 16 20:37:32 localhost.localdomain release-image-download.sh[2038]: 40e15091a793905eb63a02d951105fc5c5904bfb294f8004c052ac950c9ac44a Jan 16 20:37:32 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:33 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:33 localhost.localdomain systemd[1]: Finished Download the OpenShift Release Image. Jan 16 20:37:33 localhost.localdomain systemd[1]: Starting Configure CRI-O to use the pause image... Jan 16 20:37:34 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:34 localhost.localdomain systemd[1]: Created slice Slice /machine. Jan 16 20:37:34 localhost.localdomain systemd[1]: Started libcontainer container 44f736cf976b4d03f6931c9476eff17e341375160cc34bde2c051f22f94681e5. Jan 16 20:37:35 localhost.localdomain systemd[1]: libpod-44f736cf976b4d03f6931c9476eff17e341375160cc34bde2c051f22f94681e5.scope: Deactivated successfully. 
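release-image-download.sh pulls the release payload by sha256 digest rather than by tag, so the bootstrap node gets exactly the image the installer pinned; the bare hash it logs afterwards is the ID podman prints for the pulled image. A sketch of the digest-pinning convention, using the reference from the log (is_digest_pinned is an illustrative name):

import re

# Digest-pinned references: <repo>@sha256:<64 hex chars>
DIGEST_REF = re.compile(r"^(?P<repo>[^@\s]+)@sha256:(?P<digest>[0-9a-f]{64})$")

def is_digest_pinned(ref):
    return DIGEST_REF.match(ref) is not None

assert is_digest_pinned(
    "quay.io/openshift-release-dev/ocp-release@sha256:"
    "a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa"
)
assert not is_digest_pinned("quay.io/openshift-release-dev/ocp-release:latest")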
Jan 16 20:37:35 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-44f736cf976b4d03f6931c9476eff17e341375160cc34bde2c051f22f94681e5-userdata-shm.mount: Deactivated successfully. Jan 16 20:37:35 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-4ca1ce0d98706663226215f56c70e7fd0cc7d1f8fa2301e9d082a8c8509e2eb5-merged.mount: Deactivated successfully. Jan 16 20:37:35 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:35 localhost.localdomain systemd[1]: Finished Configure CRI-O to use the pause image. Jan 16 20:37:35 localhost.localdomain systemd[1]: Starting Container Runtime Interface for OCI (CRI-O)... Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.445452627Z" level=info msg="Starting CRI-O, version: 1.27.1-8.1.rhaos4.14.git3fecb83.el9, git: unknown(clean)" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.475383721Z" level=info msg="Node configuration value for hugetlb cgroup is true" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.475736155Z" level=info msg="Node configuration value for pid cgroup is true" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.476508962Z" level=info msg="Node configuration value for memoryswap cgroup is true" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.476790188Z" level=info msg="Node configuration value for cgroup v2 is true" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.515396088Z" level=info msg="Node configuration value for systemd CollectMode is true" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.551652686Z" level=info msg="Node configuration value for systemd AllowedCPUs is true" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.560329556Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.568233404Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.585465306Z" level=info msg="Checkpoint/restore support disabled" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.585638230Z" level=info msg="Using seccomp default profile when unspecified: true" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.585681694Z" level=info msg="Using the internal default seccomp profile" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.585710032Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.585737843Z" level=info msg="No blockio config file specified, blockio not configured" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.585765805Z" level=info msg="RDT not available in the host system" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.601234724Z" level=info msg="Conmon does support the --sync option" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.601418648Z" level=info msg="Conmon does support the 
--log-global-size-max option" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.612349639Z" level=info msg="Conmon does support the --sync option" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.612611342Z" level=info msg="Conmon does support the --log-global-size-max option" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.641580724Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conflist" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.659629387Z" level=info msg="Found CNI network loopback (type=loopback) at /etc/cni/net.d/200-loopback.conflist" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.660322805Z" level=info msg="Updated default CNI network name to crio" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.668538620Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.671909165Z" level=info msg="Creating banned CPU list file \"/etc/sysconfig/orig_irq_banned_cpus\"" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.672726743Z" level=info msg="Restore irqbalance config: created backup file" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.697461253Z" level=warning msg="Error encountered when checking whether cri-o should wipe containers: open /var/run/crio/version: no such file or directory" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.705809134Z" level=info msg="Starting seccomp notifier watcher" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.706368683Z" level=info msg="Create NRI interface" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.706416756Z" level=info msg="NRI interface is disabled in the configuration." Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.707431299Z" level=error msg="Writing clean shutdown supported file: open /var/lib/crio/clean.shutdown.supported: no such file or directory" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.707612664Z" level=error msg="Failed to sync parent directory of clean shutdown file: open /var/lib/crio: no such file or directory" Jan 16 20:37:36 localhost.localdomain crio[2304]: time="2024-01-16 20:37:36.707689383Z" level=info msg="Serving metrics on :9537 via HTTP" Jan 16 20:37:36 localhost.localdomain systemd[1]: Started Container Runtime Interface for OCI (CRI-O). Jan 16 20:37:36 localhost.localdomain systemd[1]: Starting Build Ironic environment... Jan 16 20:37:36 localhost.localdomain systemd[1]: Starting Kubernetes Kubelet... Jan 16 20:37:36 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:37 localhost.localdomain build-ironic-env.sh[2352]: PROVISIONING_INTERFACE="ens4" Jan 16 20:37:37 localhost.localdomain build-ironic-env.sh[2352]: IRONIC_BASE_URL="http://10.0.0.50" Jan 16 20:37:37 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:37 localhost.localdomain systemd[1]: Started libcontainer container 743a2fabcc9bdcefa5e757483ca12acf2b05af0d0ecb7f8bac46111ab05a5a4c. 
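At startup CRI-O enumerates /etc/cni/net.d and logs each network it finds, here the default crio bridge and the loopback conflist. A sketch, assuming the standard .conflist JSON layout ({"name": ..., "plugins": [{"type": ...}, ...]}), that produces a similar listing:

import glob
import json

def cni_networks(confdir="/etc/cni/net.d"):
    """Return (network-name, plugin-type) pairs from CNI conflist files,
    roughly mirroring CRI-O's 'Found CNI network <name> (type=<type>)' lines."""
    nets = []
    for path in sorted(glob.glob(confdir + "/*.conflist")):
        with open(path) as fh:
            conf = json.load(fh)
        for plugin in conf.get("plugins", []):
            nets.append((conf.get("name"), plugin.get("type")))
    return nets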
Jan 16 20:37:37 localhost.localdomain systemd[1]: Started libcontainer container 2cacdb2d370f09ab8e1ca04af44fc6a27c3f0d182692bf485d936c923c1256dd. Jan 16 20:37:37 localhost.localdomain systemd[1]: libpod-743a2fabcc9bdcefa5e757483ca12acf2b05af0d0ecb7f8bac46111ab05a5a4c.scope: Deactivated successfully. Jan 16 20:37:38 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-743a2fabcc9bdcefa5e757483ca12acf2b05af0d0ecb7f8bac46111ab05a5a4c-userdata-shm.mount: Deactivated successfully. Jan 16 20:37:38 localhost.localdomain build-ironic-env.sh[2352]: IRONIC_IMAGE="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:816ae46ba01e0135faad92068891d99e57e2817dd9f48128dd45fff7d0defde4" Jan 16 20:37:38 localhost.localdomain systemd[1]: libpod-2cacdb2d370f09ab8e1ca04af44fc6a27c3f0d182692bf485d936c923c1256dd.scope: Deactivated successfully. Jan 16 20:37:38 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-9d13be76c39ee445bb09c76f048c9cc8c1d8e623fdba8996f930f54de6cd0b11-merged.mount: Deactivated successfully. Jan 16 20:37:38 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2cacdb2d370f09ab8e1ca04af44fc6a27c3f0d182692bf485d936c923c1256dd-userdata-shm.mount: Deactivated successfully. Jan 16 20:37:38 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-20a2b746bcc4e1af9cc1dd2881fec9054d3c6deee73b607486ab355f6579a832-merged.mount: Deactivated successfully. Jan 16 20:37:38 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:38 localhost.localdomain systemd[1]: Started libcontainer container 2a24d039a0d79cc0e07271319371b5f0fc486b00e3e4f11d5bc27c2820fd9b06. Jan 16 20:37:39 localhost.localdomain systemd[1]: libpod-2a24d039a0d79cc0e07271319371b5f0fc486b00e3e4f11d5bc27c2820fd9b06.scope: Deactivated successfully. Jan 16 20:37:39 localhost.localdomain build-ironic-env.sh[2352]: IRONIC_AGENT_IMAGE="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:237e5d9e16bcfb4808482a9c8035cd8a14a3f983bce3d20c024cd9d2423dbca1" Jan 16 20:37:39 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-7a308ed5f6dd1f13ff90a3f4f74e0afcf4baf4c523e10ff3814c5af2ff5282cb-merged.mount: Deactivated successfully. Jan 16 20:37:39 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:39 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a24d039a0d79cc0e07271319371b5f0fc486b00e3e4f11d5bc27c2820fd9b06-userdata-shm.mount: Deactivated successfully. Jan 16 20:37:40 localhost.localdomain systemd[1]: Started libcontainer container 15c3d90ffb8e4a15a99bb5d684a7a495861cc4dfa1531c13457471707ba7efe6. Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: Flag --anonymous-auth has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: Flag --serialize-image-pulls has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
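Every deprecation warning above carries the same remedy: move the setting into the file passed via --config. A sketch mapping a few of the flags from this boot to their kubelet.config.k8s.io/v1beta1 KubeletConfiguration fields (mapping per upstream kubelet documentation; verify against the kubelet version in use):

# Deprecated kubelet flags seen above and their KubeletConfiguration fields.
FLAG_TO_CONFIG = {
    "--anonymous-auth": "authentication.anonymous.enabled",
    "--cgroup-driver": "cgroupDriver",
    "--cluster-domain": "clusterDomain",
    "--pod-manifest-path": "staticPodPath",
    "--runtime-request-timeout": "runtimeRequestTimeout",
    "--serialize-image-pulls": "serializeImagePulls",
    "--volume-plugin-dir": "volumePluginDir",
}

for flag, field in sorted(FLAG_TO_CONFIG.items()):
    print(f"{flag:28} -> {field}")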
Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.277708 2579 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280074 2579 flags.go:64] FLAG: --address="0.0.0.0" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280122 2579 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280151 2579 flags.go:64] FLAG: --anonymous-auth="false" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280173 2579 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280197 2579 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280336 2579 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280390 2579 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280417 2579 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280436 2579 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280497 2579 flags.go:64] FLAG: --azure-container-registry-config="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280528 2579 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280553 2579 flags.go:64] FLAG: --bootstrap-kubeconfig="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280573 2579 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280592 2579 flags.go:64] FLAG: --cgroup-driver="systemd" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280610 2579 flags.go:64] FLAG: --cgroup-root="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280627 2579 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280645 2579 flags.go:64] FLAG: --client-ca-file="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280662 2579 flags.go:64] FLAG: --cloud-config="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280679 2579 flags.go:64] FLAG: --cloud-provider="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280696 2579 flags.go:64] FLAG: --cluster-dns="[]" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280728 2579 flags.go:64] FLAG: --cluster-domain="cluster.local" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280746 2579 flags.go:64] FLAG: --config="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280763 2579 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280782 2579 flags.go:64] FLAG: --container-log-max-files="5" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280836 2579 flags.go:64] FLAG: 
--container-log-max-size="10Mi" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280856 2579 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280874 2579 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280899 2579 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.280918 2579 flags.go:64] FLAG: --contention-profiling="false" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281106 2579 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281125 2579 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281145 2579 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281163 2579 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281334 2579 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281379 2579 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281420 2579 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281447 2579 flags.go:64] FLAG: --enable-load-reader="false" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281466 2579 flags.go:64] FLAG: --enable-server="true" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281483 2579 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281507 2579 flags.go:64] FLAG: --event-burst="100" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281528 2579 flags.go:64] FLAG: --event-qps="50" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281547 2579 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281565 2579 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281588 2579 flags.go:64] FLAG: --eviction-hard="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281612 2579 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281630 2579 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281649 2579 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281668 2579 flags.go:64] FLAG: --eviction-soft="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281688 2579 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281705 2579 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281723 2579 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 16 
20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281741 2579 flags.go:64] FLAG: --experimental-mounter-path="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281758 2579 flags.go:64] FLAG: --fail-swap-on="true" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281776 2579 flags.go:64] FLAG: --feature-gates="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281797 2579 flags.go:64] FLAG: --file-check-frequency="20s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281815 2579 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281833 2579 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.281852 2579 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282308 2579 flags.go:64] FLAG: --healthz-port="10248" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282351 2579 flags.go:64] FLAG: --help="false" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282371 2579 flags.go:64] FLAG: --hostname-override="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282404 2579 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282445 2579 flags.go:64] FLAG: --http-check-frequency="20s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282479 2579 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282499 2579 flags.go:64] FLAG: --image-credential-provider-config="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282517 2579 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282535 2579 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282554 2579 flags.go:64] FLAG: --image-service-endpoint="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282571 2579 flags.go:64] FLAG: --iptables-drop-bit="15" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282588 2579 flags.go:64] FLAG: --iptables-masquerade-bit="14" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282605 2579 flags.go:64] FLAG: --keep-terminated-pod-volumes="false" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282623 2579 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282640 2579 flags.go:64] FLAG: --kube-api-burst="100" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282659 2579 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282678 2579 flags.go:64] FLAG: --kube-api-qps="50" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282696 2579 flags.go:64] FLAG: --kube-reserved="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282716 2579 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282733 2579 flags.go:64] FLAG: --kubeconfig="" Jan 16 20:37:40 
localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282750 2579 flags.go:64] FLAG: --kubelet-cgroups="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282768 2579 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282787 2579 flags.go:64] FLAG: --lock-file="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282804 2579 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282822 2579 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282840 2579 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282868 2579 flags.go:64] FLAG: --log-json-split-stream="false" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282885 2579 flags.go:64] FLAG: --logging-format="text" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.282903 2579 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283078 2579 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283106 2579 flags.go:64] FLAG: --manifest-url="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283124 2579 flags.go:64] FLAG: --manifest-url-header="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283183 2579 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283323 2579 flags.go:64] FLAG: --max-open-files="1000000" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283356 2579 flags.go:64] FLAG: --max-pods="110" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283377 2579 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283400 2579 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283423 2579 flags.go:64] FLAG: --memory-manager-policy="None" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283461 2579 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283502 2579 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283534 2579 flags.go:64] FLAG: --node-ip="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283553 2579 flags.go:64] FLAG: --node-labels="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283577 2579 flags.go:64] FLAG: --node-status-max-images="50" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283595 2579 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283613 2579 flags.go:64] FLAG: --oom-score-adj="-999" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283631 2579 flags.go:64] FLAG: --pod-cidr="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283649 2579 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283681 2579 flags.go:64] FLAG: --pod-manifest-path="/etc/kubernetes/manifests" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283700 2579 flags.go:64] FLAG: --pod-max-pids="-1" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283718 2579 flags.go:64] FLAG: --pods-per-core="0" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283751 2579 flags.go:64] FLAG: --port="10250" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283775 2579 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283794 2579 flags.go:64] FLAG: --provider-id="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283812 2579 flags.go:64] FLAG: --qos-reserved="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283831 2579 flags.go:64] FLAG: --read-only-port="10255" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283852 2579 flags.go:64] FLAG: --register-node="true" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283870 2579 flags.go:64] FLAG: --register-schedulable="true" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283888 2579 flags.go:64] FLAG: --register-with-taints="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.283909 2579 flags.go:64] FLAG: --registry-burst="10" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284084 2579 flags.go:64] FLAG: --registry-qps="5" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284107 2579 flags.go:64] FLAG: --reserved-cpus="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284128 2579 flags.go:64] FLAG: --reserved-memory="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284153 2579 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284172 2579 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284190 2579 flags.go:64] FLAG: --rotate-certificates="false" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284207 2579 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284336 2579 flags.go:64] FLAG: --runonce="false" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284356 2579 flags.go:64] FLAG: --runtime-cgroups="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284374 2579 flags.go:64] FLAG: --runtime-request-timeout="10m0s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284397 2579 flags.go:64] FLAG: --seccomp-default="false" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284417 2579 flags.go:64] FLAG: --serialize-image-pulls="false" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284435 2579 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284459 2579 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 
20:37:40.284491 2579 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284533 2579 flags.go:64] FLAG: --storage-driver-password="root" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284570 2579 flags.go:64] FLAG: --storage-driver-secure="false" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284592 2579 flags.go:64] FLAG: --storage-driver-table="stats" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284620 2579 flags.go:64] FLAG: --storage-driver-user="root" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284638 2579 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284658 2579 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284677 2579 flags.go:64] FLAG: --system-cgroups="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284696 2579 flags.go:64] FLAG: --system-reserved="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284716 2579 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284733 2579 flags.go:64] FLAG: --tls-cert-file="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284750 2579 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284779 2579 flags.go:64] FLAG: --tls-min-version="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284797 2579 flags.go:64] FLAG: --tls-private-key-file="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284814 2579 flags.go:64] FLAG: --topology-manager-policy="none" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284832 2579 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284850 2579 flags.go:64] FLAG: --topology-manager-scope="container" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284868 2579 flags.go:64] FLAG: --v="2" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.284891 2579 flags.go:64] FLAG: --version="false" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.285080 2579 flags.go:64] FLAG: --vmodule="" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.285120 2579 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.285142 2579 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 16 20:37:40 localhost.localdomain kubelet.sh[2579]: I0116 20:37:40.285450 2579 feature_gate.go:250] feature gates: &{map[]} Jan 16 20:37:40 localhost.localdomain systemd[1]: run-runc-15c3d90ffb8e4a15a99bb5d684a7a495861cc4dfa1531c13457471707ba7efe6-runc.Mgh2qU.mount: Deactivated successfully. Jan 16 20:37:40 localhost.localdomain systemd[1]: libpod-15c3d90ffb8e4a15a99bb5d684a7a495861cc4dfa1531c13457471707ba7efe6.scope: Deactivated successfully. Jan 16 20:37:42 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-15c3d90ffb8e4a15a99bb5d684a7a495861cc4dfa1531c13457471707ba7efe6-userdata-shm.mount: Deactivated successfully. 
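The flags.go:64 dump above records every effective kubelet flag, which makes the full configuration recoverable from a journal alone. A minimal parser for that format (parse_kubelet_flags is an illustrative name):

import re

FLAG_RE = re.compile(r'flags\.go:\d+\] FLAG: (?P<name>--[\w-]+)="(?P<value>[^"]*)"')

def parse_kubelet_flags(lines):
    """Collect flag name/value pairs from kubelet startup FLAG lines."""
    flags = {}
    for line in lines:
        m = FLAG_RE.search(line)
        if m:
            flags[m["name"]] = m["value"]
    return flags

sample = 'I0116 20:37:40.280573 2579 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"'
assert parse_kubelet_flags([sample]) == {"--cert-dir": "/var/lib/kubelet/pki"}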
Jan 16 20:37:42 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-7c6ae5dab7d412063f39ad9dfee4d7808c92ef84d089bd081bfa3452cbe15301-merged.mount: Deactivated successfully. Jan 16 20:37:43 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:43 localhost.localdomain build-ironic-env.sh[2352]: CUSTOMIZATION_IMAGE="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f8538a98c875dd9a885575cb5b0d590d2ead2ff849d40d0ebeb7539deb69651" Jan 16 20:37:43 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.461863 2579 server.go:415] "Kubelet version" kubeletVersion="v1.27.6+f67aeb3" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.462663 2579 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.462802 2579 feature_gate.go:250] feature gates: &{map[]} Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.463327 2579 feature_gate.go:250] feature gates: &{map[]} Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.476131 2579 server.go:578] "Standalone mode, no API client" Jan 16 20:37:43 localhost.localdomain systemd[1948]: Starting D-Bus User Message Bus... Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.505257 2579 fs.go:133] Filesystem UUIDs: map[45961086-2073-4e1e-9449-be5affdc08c1:/dev/vda4 5256ed23-0bf9-4f14-8749-ad59e0e9a846:/dev/vda3 66BC-6698:/dev/vda2] Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.506091 2579 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:23 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:25 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0}] Jan 16 20:37:43 localhost.localdomain systemd[1948]: Started D-Bus User Message Bus. 
Jan 16 20:37:43 localhost.localdomain dbus-broker-launch[3218]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored Jan 16 20:37:43 localhost.localdomain dbus-broker-launch[3218]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored Jan 16 20:37:43 localhost.localdomain dbus-broker-lau[3218]: Ready Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.549819 2579 manager.go:210] Machine: {Timestamp:2024-01-16 20:37:43.547292545 +0000 UTC m=+4.317699511 CPUVendorID:GenuineIntel NumCores:4 NumPhysicalCores:1 NumSockets:4 CpuFrequency:1999998 MemoryCapacity:6203097088 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:efce106942834b6e8dcf2db4b261dcf3 SystemUUID:efce1069-4283-4b6e-8dcf-2db4b261dcf3 BootID:08825aa6-a3fb-4095-b0cb-b7310a6e5446 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:620306432 Type:vfs Inodes:151442 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:23 Capacity:3101548544 Type:vfs Inodes:757214 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:25 Capacity:1240621056 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:33754689536 Type:vfs Inodes:16514496 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:3101548544 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:34359738368 Scheduler:mq-deadline}] NetworkDevices:[{Name:ens3 MacAddress:52:54:00:be:ad:32 Speed:-1 Mtu:1500} {Name:ens4 MacAddress:52:54:00:5a:66:3f Speed:-1 Mtu:1500}] Topology:[{Id:0 Memory:6203097088 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:4194304 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:4194304 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:4194304 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:4194304 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.551494 2579 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.552774 2579 manager.go:226] Version: {KernelVersion:5.14.0-284.36.1.el9_2.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 414.92.202310210434-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 16 20:37:43 localhost.localdomain systemd[1948]: Created slice Slice /user. 
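The cAdvisor Machine line reports all capacities in raw bytes (MemoryCapacity:6203097088, one 34359738368-byte vda disk). A two-line conversion makes the sizes readable:

def gib(n_bytes):
    """Convert a raw byte count to GiB."""
    return n_bytes / 2**30

# Values from the Machine line above: about 5.78 GiB of RAM, a 32.00 GiB vda.
print(f"memory {gib(6203097088):.2f} GiB, vda {gib(34359738368):.2f} GiB")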
Jan 16 20:37:43 localhost.localdomain systemd[1948]: podman-2618.scope: unit configures an IP firewall, but not running as root. Jan 16 20:37:43 localhost.localdomain systemd[1948]: (This warning is only shown for the first unit using IP firewalling.) Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.558476 2579 server.go:466] "No api server defined - no events will be sent to API server" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.559184 2579 server.go:668] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.561068 2579 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.561628 2579 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.562276 2579 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.563219 2579 container_manager_linux.go:304] "Creating device plugin manager" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.563768 2579 manager.go:135] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.565610 2579 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.566308 2579 state_mem.go:36] "Initialized new in-memory state store" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.567283 2579 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock" Jan 16 20:37:43 localhost.localdomain systemd[1948]: Started podman-2618.scope. Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.660317 2579 remote_runtime.go:126] "Validated CRI v1 runtime API" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.660457 2579 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock" Jan 16 20:37:43 localhost.localdomain systemd[1948]: Started podman-pause-5cb506f5.scope. 
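The kubelet warns twice above that the bare socket path /var/run/crio/crio.sock is deprecated in favor of a full URL. The normalization it asks for is a one-liner (normalize_cri_endpoint is an illustrative name):

def normalize_cri_endpoint(endpoint):
    """Prefix bare socket paths with unix://, as the kubelet suggests."""
    return endpoint if "://" in endpoint else "unix://" + endpoint

assert normalize_cri_endpoint("/var/run/crio/crio.sock") == "unix:///var/run/crio/crio.sock"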
Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.673184 2579 remote_image.go:98] "Validated CRI v1 image API" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.673403 2579 server.go:1139] "Using root directory" path="/var/lib/kubelet" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.676383 2579 kubelet.go:420] "Kubelet is running in standalone mode, will skip API server sync" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.676616 2579 kubelet.go:307] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.678306 2579 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.689237 2579 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="cri-o" version="1.27.1-8.1.rhaos4.14.git3fecb83.el9" apiVersion="v1" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.693193 2579 volume_host.go:75] "KubeClient is nil. Skip initialization of CSIDriverLister" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.694098 2579 plugins.go:639] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.694256 2579 plugins.go:639] "Loaded volume plugin" pluginName="kubernetes.io/azure-file" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.694305 2579 plugins.go:639] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.694402 2579 plugins.go:639] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.694711 2579 plugins.go:639] "Loaded volume plugin" pluginName="kubernetes.io/rbd" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.696101 2579 plugins.go:639] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.696278 2579 plugins.go:639] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.696343 2579 plugins.go:639] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.696445 2579 plugins.go:639] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.696601 2579 plugins.go:639] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.696683 2579 plugins.go:639] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.696739 2579 plugins.go:639] "Loaded volume plugin" pluginName="kubernetes.io/cephfs" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.696779 2579 plugins.go:639] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.696835 2579 plugins.go:639] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.697245 2579 plugins.go:639] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 16 20:37:43 localhost.localdomain 
kubelet.sh[2579]: I0116 20:37:43.697313 2579 plugins.go:639] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.697371 2579 plugins.go:639] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: W0116 20:37:43.697567 2579 csi_plugin.go:189] kubernetes.io/csi: kubeclient not set, assuming standalone kubelet Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: W0116 20:37:43.697638 2579 csi_plugin.go:266] Skipping CSINode initialization, kubelet running in standalone mode Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.697674 2579 plugins.go:639] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.699755 2579 server.go:1174] "Started kubelet" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.700447 2579 kubelet.go:1621] "No API server defined - no node status update will be sent" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: E0116 20:37:43.700813 2579 kubelet.go:1476] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.702605 2579 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.702785 2579 server.go:194] "Starting to listen read-only" address="0.0.0.0" port=10255 Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.705747 2579 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jan 16 20:37:43 localhost.localdomain systemd[1]: Started Kubernetes Kubelet. Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.714384 2579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.718141 2579 server.go:461] "Adding debug handlers to kubelet server" Jan 16 20:37:43 localhost.localdomain systemd[1]: Started Bootstrap a Kubernetes cluster. Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.725900 2579 volume_manager.go:288] "The desired_state_of_world populator starts" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.726098 2579 volume_manager.go:290] "Starting Kubelet Volume Manager" Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.726619 2579 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 16 20:37:43 localhost.localdomain systemd[1]: Started Approve CSRs during bootstrap phase. 
Jan 16 20:37:43 localhost.localdomain crio[2304]: time="2024-01-16 20:37:43.761416048Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2" id=c140f9dd-6a68-418e-ad00-dd2b9c934dc6 name=/runtime.v1.ImageService/ImageStatus Jan 16 20:37:43 localhost.localdomain crio[2304]: time="2024-01-16 20:37:43.762335890Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2 not found" id=c140f9dd-6a68-418e-ad00-dd2b9c934dc6 name=/runtime.v1.ImageService/ImageStatus Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.790860 2579 factory.go:153] Registering CRI-O factory Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.792014 2579 factory.go:55] Registering systemd factory Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.793360 2579 factory.go:103] Registering Raw factory Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.796655 2579 manager.go:1186] Started watching for new ooms in manager Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.822287 2579 manager.go:299] Starting recovery of all containers Jan 16 20:37:43 localhost.localdomain kubelet.sh[2579]: I0116 20:37:43.899065 2579 manager.go:304] Recovery completed Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.230136 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.235094 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.235233 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.235261 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.239326 2579 cpu_manager.go:215] "Starting CPU manager" policy="none" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.239418 2579 cpu_manager.go:216] "Reconciling" reconcilePeriod="10s" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.239462 2579 state_mem.go:36] "Initialized new in-memory state store" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.246709 2579 policy_none.go:49] "None policy: Start" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.253146 2579 memory_manager.go:169] "Starting memorymanager" policy="None" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.253215 2579 state_mem.go:35] "Initializing new in-memory state store" Jan 16 20:37:44 localhost.localdomain systemd[1]: Started libcontainer container 811eb2b385269528454207ed7ca7e3192b7b4269797d17e4097f7ef1980d84b8. Jan 16 20:37:44 localhost.localdomain kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled Jan 16 20:37:44 localhost.localdomain systemd[1]: Created slice libcontainer container kubepods.slice. 
Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.354157 2579 manager.go:295] "Starting Device Plugin manager" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.354838 2579 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.354868 2579 server.go:79] "Starting device plugin registration server" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.356857 2579 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.357161 2579 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.357182 2579 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.359442 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:37:44 localhost.localdomain systemd[1]: Created slice libcontainer container kubepods-burstable.slice. Jan 16 20:37:44 localhost.localdomain systemd[1]: Created slice libcontainer container kubepods-besteffort.slice. Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.371792 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.372698 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.373252 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:37:44 localhost.localdomain approve-csr.sh[3230]: Approving all CSR requests until bootstrapping is complete... Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.447176 2579 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.463644 2579 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.464265 2579 status_manager.go:212] "Kubernetes client is nil, not starting status manager" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.464415 2579 kubelet.go:2339] "Starting kubelet main sync loop" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: E0116 20:37:44.464527 2579 kubelet.go:2363] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 16 20:37:44 localhost.localdomain systemd[1]: libpod-811eb2b385269528454207ed7ca7e3192b7b4269797d17e4097f7ef1980d84b8.scope: Deactivated successfully. 
Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.565390 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[] Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.636808 2579 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 16 20:37:44 localhost.localdomain kubelet.sh[2579]: I0116 20:37:44.648192 2579 reconciler.go:41] "Reconciler: start to sync state" Jan 16 20:37:44 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-811eb2b385269528454207ed7ca7e3192b7b4269797d17e4097f7ef1980d84b8-userdata-shm.mount: Deactivated successfully. Jan 16 20:37:44 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-1725030f5968c8729063c4379c5c9005f9737bc4599d888b1c00ecd046e7aa56-merged.mount: Deactivated successfully. Jan 16 20:37:44 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:44 localhost.localdomain build-ironic-env.sh[2352]: MACHINE_OS_IMAGES_IMAGE="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e6607ddc665654a569f992895df82828d5daef254925bafdb1e4e7e260b870e" Jan 16 20:37:44 localhost.localdomain systemd[1]: Started libcontainer container f7998d29550535077aef1c0013ce766aac054cc7d0eeeda9834da90c6b2de599. Jan 16 20:37:45 localhost.localdomain approve-csr.sh[3309]: E0116 20:37:45.164296 3309 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:37:45 localhost.localdomain approve-csr.sh[3309]: E0116 20:37:45.166492 3309 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:37:45 localhost.localdomain systemd[1]: libpod-f7998d29550535077aef1c0013ce766aac054cc7d0eeeda9834da90c6b2de599.scope: Deactivated successfully. Jan 16 20:37:45 localhost.localdomain approve-csr.sh[3309]: E0116 20:37:45.169670 3309 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:37:45 localhost.localdomain approve-csr.sh[3309]: E0116 20:37:45.172679 3309 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:37:45 localhost.localdomain approve-csr.sh[3309]: E0116 20:37:45.174471 3309 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:37:45 localhost.localdomain approve-csr.sh[3309]: The connection to the server localhost:6443 was refused - did you specify the right host or port? Jan 16 20:37:45 localhost.localdomain systemd[1]: Started libcontainer container e63ca987e5108cf73f95c89289935c3849ce2af6af9ade40d6d9d793c25c20ab. Jan 16 20:37:45 localhost.localdomain build-ironic-env.sh[3474]: Trying to pull quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:816ae46ba01e0135faad92068891d99e57e2817dd9f48128dd45fff7d0defde4... Jan 16 20:37:45 localhost.localdomain systemd[1]: libpod-e63ca987e5108cf73f95c89289935c3849ce2af6af9ade40d6d9d793c25c20ab.scope: Deactivated successfully. 
Jan 16 20:37:45 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e63ca987e5108cf73f95c89289935c3849ce2af6af9ade40d6d9d793c25c20ab-userdata-shm.mount: Deactivated successfully. Jan 16 20:37:45 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-f8760a672a5d4dc84cde2797ffde83029dccc0e86def6542bf010bc981629168-merged.mount: Deactivated successfully. Jan 16 20:37:45 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:46 localhost.localdomain systemd[1]: Started libcontainer container 116dda7811399e11a374d13f5be9636bfe0906d463d3849cffb17dc29b0d595f. Jan 16 20:37:46 localhost.localdomain systemd[1]: libpod-116dda7811399e11a374d13f5be9636bfe0906d463d3849cffb17dc29b0d595f.scope: Deactivated successfully. Jan 16 20:37:46 localhost.localdomain sudo[3577]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps Jan 16 20:37:46 localhost.localdomain sudo[3577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:37:46 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-116dda7811399e11a374d13f5be9636bfe0906d463d3849cffb17dc29b0d595f-userdata-shm.mount: Deactivated successfully. Jan 16 20:37:46 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-1071e5fa4be390f69000c2914f7a4935f7c2fb0a5881239638c149fa6d658cee-merged.mount: Deactivated successfully. Jan 16 20:37:46 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:46 localhost.localdomain sudo[3577]: pam_unix(sudo:session): session closed for user root Jan 16 20:37:47 localhost.localdomain systemd[1]: Started libcontainer container 21b011625d094f9fab0d4ae3b5b7919cecc009b5e17585d2207b3fb0332a5c10. Jan 16 20:37:47 localhost.localdomain systemd[1]: libpod-21b011625d094f9fab0d4ae3b5b7919cecc009b5e17585d2207b3fb0332a5c10.scope: Deactivated successfully. Jan 16 20:37:47 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21b011625d094f9fab0d4ae3b5b7919cecc009b5e17585d2207b3fb0332a5c10-userdata-shm.mount: Deactivated successfully. Jan 16 20:37:47 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-0866b0166aabf14025e1ae15b25c115ed0306d5e4c088469c64dc263edb31200-merged.mount: Deactivated successfully. Jan 16 20:37:48 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:48 localhost.localdomain systemd[1]: Started libcontainer container 9d836d25d06b3da89aee60ea926ca58bf20a0c0bcf8e6603fe96d56a784c1421. Jan 16 20:37:48 localhost.localdomain systemd[1]: libpod-9d836d25d06b3da89aee60ea926ca58bf20a0c0bcf8e6603fe96d56a784c1421.scope: Deactivated successfully. Jan 16 20:37:49 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d836d25d06b3da89aee60ea926ca58bf20a0c0bcf8e6603fe96d56a784c1421-userdata-shm.mount: Deactivated successfully. Jan 16 20:37:49 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-16e5a1065c1ed57076b73930f0de682e3129dcba0907bc67b0a7f95bd368d831-merged.mount: Deactivated successfully. Jan 16 20:37:49 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. 
Jan 16 20:37:49 localhost.localdomain build-ironic-env.sh[3474]: Getting image source signatures Jan 16 20:37:49 localhost.localdomain build-ironic-env.sh[3474]: Copying blob sha256:44225c7147e7fcd3e1376dc06b74ec71b4d8a5a8d904ba6bf04b8bd642e09335 Jan 16 20:37:49 localhost.localdomain build-ironic-env.sh[3474]: Copying blob sha256:ca1636478fe5b8e2a56600e24d6759147feb15020824334f4a798c1cb6ed58e2 Jan 16 20:37:49 localhost.localdomain build-ironic-env.sh[3474]: Copying blob sha256:616c141c625514839c52fddb8148daa99218deecad9f1c5717beeaac3af20b4b Jan 16 20:37:49 localhost.localdomain systemd[1]: Started libcontainer container 3088f1551ae65bd24f3ca9e107262033f13ccec721060e23683c899985925dab. Jan 16 20:37:49 localhost.localdomain sudo[3781]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps -a Jan 16 20:37:49 localhost.localdomain sudo[3781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:37:50 localhost.localdomain sudo[3781]: pam_unix(sudo:session): session closed for user root Jan 16 20:37:50 localhost.localdomain systemd[1]: libpod-3088f1551ae65bd24f3ca9e107262033f13ccec721060e23683c899985925dab.scope: Deactivated successfully. Jan 16 20:37:50 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3088f1551ae65bd24f3ca9e107262033f13ccec721060e23683c899985925dab-userdata-shm.mount: Deactivated successfully. Jan 16 20:37:50 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-6899045be589a27212d1cc8fe9020b5e291026cf07663d5ef7dca4d2939d1789-merged.mount: Deactivated successfully. Jan 16 20:37:51 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:51 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:51 localhost.localdomain NetworkManager[1706]: [1705437471.3258] policy: set-hostname: set hostname to 'localhost.localdomain' (no hostname found) Jan 16 20:37:51 localhost.localdomain systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 16 20:37:51 localhost.localdomain NetworkManager[1706]: [1705437471.3334] hostname: couldn't set the system hostname to 'localhost.localdomain' using hostnamed: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name is not activatable Jan 16 20:37:51 localhost.localdomain NetworkManager[1706]: [1705437471.3338] policy: set-hostname: couldn't set the system hostname to 'localhost.localdomain': (1) Operation not permitted Jan 16 20:37:51 localhost.localdomain NetworkManager[1706]: [1705437471.3341] policy: set-hostname: you should use hostnamed when systemd hardening is in effect! Jan 16 20:37:51 localhost.localdomain systemd[1]: Started libcontainer container bcf32b04a86660971bc79733e2458d667903f85631deec607379aa96701af6d0. Jan 16 20:37:52 localhost.localdomain systemd[1]: libpod-bcf32b04a86660971bc79733e2458d667903f85631deec607379aa96701af6d0.scope: Deactivated successfully. Jan 16 20:37:52 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-7179ed9d2561e632277487a68919d32fce39950f10b09964d0dc8d4ec0210fc7-merged.mount: Deactivated successfully. Jan 16 20:37:52 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bcf32b04a86660971bc79733e2458d667903f85631deec607379aa96701af6d0-userdata-shm.mount: Deactivated successfully. Jan 16 20:37:52 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. 
Jan 16 20:37:52 localhost.localdomain systemd[1]: Started libcontainer container 7243eb0f99d5ab2e0573343310547797776d8352bf120884fae903f46e38c32c. Jan 16 20:37:53 localhost.localdomain systemd[1]: libpod-7243eb0f99d5ab2e0573343310547797776d8352bf120884fae903f46e38c32c.scope: Deactivated successfully. Jan 16 20:37:53 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7243eb0f99d5ab2e0573343310547797776d8352bf120884fae903f46e38c32c-userdata-shm.mount: Deactivated successfully. Jan 16 20:37:53 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-c3852f4ada0c1b23312a042a3049ab426d1232f75f7ef6e94c98562a3a0d382b-merged.mount: Deactivated successfully. Jan 16 20:37:53 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:53 localhost.localdomain systemd[1]: Started libcontainer container 45f1c35ae8d2c4462e2ade7eea22eeb9741a023ba3208fbdef1d560307e0a001. Jan 16 20:37:53 localhost.localdomain systemd[1]: libpod-45f1c35ae8d2c4462e2ade7eea22eeb9741a023ba3208fbdef1d560307e0a001.scope: Deactivated successfully. Jan 16 20:37:54 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-45f1c35ae8d2c4462e2ade7eea22eeb9741a023ba3208fbdef1d560307e0a001-userdata-shm.mount: Deactivated successfully. Jan 16 20:37:54 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-4190aaf06a7e57df3237e31a61aff51d4ff683117de7d8f61561d2aea8e1957d-merged.mount: Deactivated successfully. Jan 16 20:37:54 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:54 localhost.localdomain kubelet.sh[2579]: I0116 20:37:54.399348 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:37:54 localhost.localdomain kubelet.sh[2579]: I0116 20:37:54.408520 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:37:54 localhost.localdomain kubelet.sh[2579]: I0116 20:37:54.410520 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:37:54 localhost.localdomain kubelet.sh[2579]: I0116 20:37:54.411488 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:37:54 localhost.localdomain systemd[1]: Started libcontainer container 011a331992e3ffe2333f0610336c488b5263703d7b667e9b4014c4e5aa3ea1d2. Jan 16 20:37:54 localhost.localdomain systemd[1]: libpod-011a331992e3ffe2333f0610336c488b5263703d7b667e9b4014c4e5aa3ea1d2.scope: Deactivated successfully. Jan 16 20:37:55 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-011a331992e3ffe2333f0610336c488b5263703d7b667e9b4014c4e5aa3ea1d2-userdata-shm.mount: Deactivated successfully. Jan 16 20:37:55 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-600ff8b7a53a00fe29a7cc4cd924a8ec4b8cac498315c84dda6ea3769ea7726b-merged.mount: Deactivated successfully. Jan 16 20:37:55 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:55 localhost.localdomain systemd[1]: Started libcontainer container e8fd5390c727b10940e87675b240605e2c6339abecfbbf422e7566b892118f1c. 
Jan 16 20:37:56 localhost.localdomain systemd[1]: libpod-e8fd5390c727b10940e87675b240605e2c6339abecfbbf422e7566b892118f1c.scope: Deactivated successfully. Jan 16 20:37:56 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8fd5390c727b10940e87675b240605e2c6339abecfbbf422e7566b892118f1c-userdata-shm.mount: Deactivated successfully. Jan 16 20:37:56 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-7569c1abd5124ffaa69cfa9e03c2b5c7086250f7199d4e92355d219f56578593-merged.mount: Deactivated successfully. Jan 16 20:37:56 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:56 localhost.localdomain systemd[1]: Started libcontainer container b3a99c65722779fc4760c4b5544fba825cb90c7740e62732b2485e114aa507e9. Jan 16 20:37:57 localhost.localdomain systemd[1]: libpod-b3a99c65722779fc4760c4b5544fba825cb90c7740e62732b2485e114aa507e9.scope: Deactivated successfully. Jan 16 20:37:57 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b3a99c65722779fc4760c4b5544fba825cb90c7740e62732b2485e114aa507e9-userdata-shm.mount: Deactivated successfully. Jan 16 20:37:57 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-36492c39fd74a2d5e4e4c25e4579908d94ad3d49af20ab0e75caa18a396c77b9-merged.mount: Deactivated successfully. Jan 16 20:37:57 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:57 localhost.localdomain systemd[1]: Started libcontainer container 44e1ab149a70f1362f2bb116b3e37d0b1e142fb6f6a2bf53b81081ea3f382928. Jan 16 20:37:58 localhost.localdomain systemd[1]: libpod-44e1ab149a70f1362f2bb116b3e37d0b1e142fb6f6a2bf53b81081ea3f382928.scope: Deactivated successfully. Jan 16 20:37:58 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-44e1ab149a70f1362f2bb116b3e37d0b1e142fb6f6a2bf53b81081ea3f382928-userdata-shm.mount: Deactivated successfully. Jan 16 20:37:58 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-3fea9ddd7984bb547e6214ffb20edf87b72e690120fc622581ae176bb7f66c73-merged.mount: Deactivated successfully. Jan 16 20:37:58 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:58 localhost.localdomain systemd[1]: Started libcontainer container 740ff5e47e8f521e890316240d68520b22623df7ca26ef15be360f86fbc4ea5c. Jan 16 20:37:59 localhost.localdomain systemd[1]: libpod-740ff5e47e8f521e890316240d68520b22623df7ca26ef15be360f86fbc4ea5c.scope: Deactivated successfully. Jan 16 20:37:59 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-740ff5e47e8f521e890316240d68520b22623df7ca26ef15be360f86fbc4ea5c-userdata-shm.mount: Deactivated successfully. Jan 16 20:37:59 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-fc41e6311797090470ba5330eaae8f6360527929103525a7358a0917bd96777b-merged.mount: Deactivated successfully. Jan 16 20:37:59 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:37:59 localhost.localdomain systemd[1]: Started libcontainer container 14e14781aeac3a5e228a653f779c6be6a0cd9992306ee2ac115850dd3858c6d1. Jan 16 20:37:59 localhost.localdomain systemd[1]: libpod-14e14781aeac3a5e228a653f779c6be6a0cd9992306ee2ac115850dd3858c6d1.scope: Deactivated successfully. 
Jan 16 20:38:00 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14e14781aeac3a5e228a653f779c6be6a0cd9992306ee2ac115850dd3858c6d1-userdata-shm.mount: Deactivated successfully. Jan 16 20:38:00 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-144bd9c9787cd4abe809399c6d6ca0af6b488417ffdc4063ad4a5cfb0e17e5ec-merged.mount: Deactivated successfully. Jan 16 20:38:00 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:38:00 localhost.localdomain systemd[1]: Started libcontainer container 230cc40d3c75bf1829898aef4fcb292b7f512f3d1b7feef47ebe5f6242f0eb44. Jan 16 20:38:00 localhost.localdomain systemd[1]: libpod-230cc40d3c75bf1829898aef4fcb292b7f512f3d1b7feef47ebe5f6242f0eb44.scope: Deactivated successfully. Jan 16 20:38:00 localhost.localdomain sudo[4505]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps Jan 16 20:38:00 localhost.localdomain sudo[4505]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:38:01 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-230cc40d3c75bf1829898aef4fcb292b7f512f3d1b7feef47ebe5f6242f0eb44-userdata-shm.mount: Deactivated successfully. Jan 16 20:38:01 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-fe5710745ccf09c852fe1c1c576629e3b879015ab3ae8eccd432003f0e671726-merged.mount: Deactivated successfully. Jan 16 20:38:01 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:38:01 localhost.localdomain sudo[4505]: pam_unix(sudo:session): session closed for user root Jan 16 20:38:01 localhost.localdomain bootkube.sh[3228]: Moving OpenShift manifests in with the rest of them Jan 16 20:38:01 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:38:02 localhost.localdomain sudo[4564]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps -a Jan 16 20:38:02 localhost.localdomain sudo[4564]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:38:03 localhost.localdomain sudo[4564]: pam_unix(sudo:session): session closed for user root Jan 16 20:38:03 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:38:04 localhost.localdomain bootkube.sh[3228]: Rendering cluster config manifests... 
Jan 16 20:38:04 localhost.localdomain kubelet.sh[2579]: I0116 20:38:04.443133 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:38:04 localhost.localdomain kubelet.sh[2579]: I0116 20:38:04.450447 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:38:04 localhost.localdomain kubelet.sh[2579]: I0116 20:38:04.450681 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:38:04 localhost.localdomain kubelet.sh[2579]: I0116 20:38:04.450786 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:38:05 localhost.localdomain approve-csr.sh[4601]: E0116 20:38:05.561685 4601 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:38:05 localhost.localdomain approve-csr.sh[4601]: E0116 20:38:05.563565 4601 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:38:05 localhost.localdomain approve-csr.sh[4601]: E0116 20:38:05.564482 4601 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:38:05 localhost.localdomain approve-csr.sh[4601]: E0116 20:38:05.566605 4601 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:38:05 localhost.localdomain approve-csr.sh[4601]: E0116 20:38:05.567683 4601 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:38:05 localhost.localdomain approve-csr.sh[4601]: The connection to the server localhost:6443 was refused - did you specify the right host or port? 
Jan 16 20:38:05 localhost.localdomain sudo[4613]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps -a Jan 16 20:38:05 localhost.localdomain sudo[4613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:38:06 localhost.localdomain NetworkManager[1706]: [1705437486.2762] device (ens4): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 16 20:38:06 localhost.localdomain NetworkManager[1706]: [1705437486.2778] device (ens4): Activation: failed for connection 'Wired connection 2' Jan 16 20:38:06 localhost.localdomain NetworkManager[1706]: [1705437486.2784] device (ens4): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 16 20:38:06 localhost.localdomain NetworkManager[1706]: [1705437486.2812] dhcp4 (ens4): canceled DHCP transaction Jan 16 20:38:06 localhost.localdomain NetworkManager[1706]: [1705437486.2813] dhcp4 (ens4): activation: beginning transaction (timeout in 45 seconds) Jan 16 20:38:06 localhost.localdomain NetworkManager[1706]: [1705437486.2813] dhcp4 (ens4): state changed no lease Jan 16 20:38:06 localhost.localdomain NetworkManager[1706]: [1705437486.2844] policy: auto-activating connection 'Wired connection 2' (01a00a81-2a9f-354e-9949-a4ae9b243f35) Jan 16 20:38:06 localhost.localdomain NetworkManager[1706]: [1705437486.2854] device (ens4): Activation: starting connection 'Wired connection 2' (01a00a81-2a9f-354e-9949-a4ae9b243f35) Jan 16 20:38:06 localhost.localdomain NetworkManager[1706]: [1705437486.2857] device (ens4): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 16 20:38:06 localhost.localdomain NetworkManager[1706]: [1705437486.2867] device (ens4): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 16 20:38:06 localhost.localdomain NetworkManager[1706]: [1705437486.2943] device (ens4): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 16 20:38:06 localhost.localdomain NetworkManager[1706]: [1705437486.2985] dhcp4 (ens4): activation: beginning transaction (timeout in 45 seconds) Jan 16 20:38:14 localhost.localdomain kubelet.sh[2579]: I0116 20:38:14.505193 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:38:14 localhost.localdomain kubelet.sh[2579]: I0116 20:38:14.512456 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:38:14 localhost.localdomain kubelet.sh[2579]: I0116 20:38:14.512565 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:38:14 localhost.localdomain kubelet.sh[2579]: I0116 20:38:14.512610 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:38:17 localhost.localdomain sudo[4613]: pam_unix(sudo:session): session closed for user root Jan 16 20:38:17 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. 
Jan 16 20:38:23 localhost.localdomain sudo[4708]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps Jan 16 20:38:23 localhost.localdomain sudo[4708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:38:24 localhost.localdomain kubelet.sh[2579]: I0116 20:38:24.555317 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:38:24 localhost.localdomain kubelet.sh[2579]: I0116 20:38:24.560459 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:38:24 localhost.localdomain kubelet.sh[2579]: I0116 20:38:24.560645 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:38:24 localhost.localdomain kubelet.sh[2579]: I0116 20:38:24.560706 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:38:25 localhost.localdomain approve-csr.sh[4719]: E0116 20:38:25.953823 4719 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:38:25 localhost.localdomain approve-csr.sh[4719]: E0116 20:38:25.954718 4719 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:38:25 localhost.localdomain approve-csr.sh[4719]: E0116 20:38:25.960465 4719 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:38:25 localhost.localdomain approve-csr.sh[4719]: E0116 20:38:25.962609 4719 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:38:25 localhost.localdomain approve-csr.sh[4719]: E0116 20:38:25.963892 4719 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:38:25 localhost.localdomain approve-csr.sh[4719]: The connection to the server localhost:6443 was refused - did you specify the right host or port? Jan 16 20:38:31 localhost.localdomain build-ironic-env.sh[3474]: Copying config sha256:1018c21337e38bf5b4258efcd4214f431401a9544a5bd19d9f0d1e03ee98fcd3 Jan 16 20:38:31 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:38:31 localhost.localdomain sudo[4708]: pam_unix(sudo:session): session closed for user root Jan 16 20:38:31 localhost.localdomain build-ironic-env.sh[3474]: Writing manifest to image destination Jan 16 20:38:31 localhost.localdomain build-ironic-env.sh[3474]: Storing signatures Jan 16 20:38:32 localhost.localdomain kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 16 20:38:32 localhost.localdomain NetworkManager[1706]: [1705437512.4415] manager: (cni-podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/4) Jan 16 20:38:32 localhost.localdomain NetworkManager[1706]: [1705437512.4958] manager: (veth6e889f5c): new Veth device (/org/freedesktop/NetworkManager/Devices/5) Jan 16 20:38:32 localhost.localdomain kernel: cni-podman0: port 1(veth6e889f5c) entered blocking state Jan 16 20:38:32 localhost.localdomain kernel: cni-podman0: port 1(veth6e889f5c) entered disabled state Jan 16 20:38:32 localhost.localdomain kernel: device veth6e889f5c entered promiscuous mode Jan 16 20:38:32 localhost.localdomain NetworkManager[1706]: [1705437512.5274] device (cni-podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Jan 16 20:38:32 localhost.localdomain NetworkManager[1706]: [1705437512.5326] device (cni-podman0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Jan 16 20:38:32 localhost.localdomain NetworkManager[1706]: [1705437512.5384] device (cni-podman0): Activation: starting connection 'cni-podman0' (cefb1570-cd80-4bc8-bf95-3171392c3c35) Jan 16 20:38:32 localhost.localdomain NetworkManager[1706]: [1705437512.5395] device (cni-podman0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Jan 16 20:38:32 localhost.localdomain NetworkManager[1706]: [1705437512.5508] device (cni-podman0): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Jan 16 20:38:32 localhost.localdomain NetworkManager[1706]: [1705437512.5531] device (cni-podman0): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Jan 16 20:38:32 localhost.localdomain NetworkManager[1706]: [1705437512.5565] device (cni-podman0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Jan 16 20:38:32 localhost.localdomain systemd[1]: Starting Network Manager Script Dispatcher Service... Jan 16 20:38:32 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jan 16 20:38:32 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth6e889f5c: link becomes ready Jan 16 20:38:32 localhost.localdomain kernel: cni-podman0: port 1(veth6e889f5c) entered blocking state Jan 16 20:38:32 localhost.localdomain kernel: cni-podman0: port 1(veth6e889f5c) entered forwarding state Jan 16 20:38:32 localhost.localdomain NetworkManager[1706]: [1705437512.6132] device (veth6e889f5c): carrier: link connected Jan 16 20:38:32 localhost.localdomain NetworkManager[1706]: [1705437512.6153] device (cni-podman0): carrier: link connected Jan 16 20:38:32 localhost.localdomain systemd[1]: Started Network Manager Script Dispatcher Service. Jan 16 20:38:32 localhost.localdomain NetworkManager[1706]: [1705437512.6442] device (cni-podman0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Jan 16 20:38:32 localhost.localdomain NetworkManager[1706]: [1705437512.6458] device (cni-podman0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Jan 16 20:38:32 localhost.localdomain NetworkManager[1706]: [1705437512.6498] device (cni-podman0): Activation: successful, device activated. Jan 16 20:38:32 localhost.localdomain systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive. Jan 16 20:38:32 localhost.localdomain root[4787]: NM local-dns-prepender triggered by cni-podman0 up. 
Jan 16 20:38:32 localhost.localdomain nm-dispatcher[4787]: <13>Jan 16 20:38:32 root: NM local-dns-prepender triggered by cni-podman0 up. Jan 16 20:38:32 localhost.localdomain nm-dispatcher[4797]: Failed to get unit file state for systemd-resolved.service: No such file or directory Jan 16 20:38:32 localhost.localdomain root[4803]: NM local-dns-prepender: Checking if local DNS IP is the first entry in resolv.conf Jan 16 20:38:32 localhost.localdomain nm-dispatcher[4803]: <13>Jan 16 20:38:32 root: NM local-dns-prepender: Checking if local DNS IP is the first entry in resolv.conf Jan 16 20:38:32 localhost.localdomain root[4807]: NM local-dns-prepender: local DNS IP already is the first entry in resolv.conf Jan 16 20:38:32 localhost.localdomain nm-dispatcher[4807]: <13>Jan 16 20:38:32 root: NM local-dns-prepender: local DNS IP already is the first entry in resolv.conf Jan 16 20:38:33 localhost.localdomain systemd[1]: Started libcontainer container a13e99013a910c6503b6753a93348f5624b63fd58ed93dc47f042ba2acbfba92. Jan 16 20:38:33 localhost.localdomain systemd[1]: libpod-a13e99013a910c6503b6753a93348f5624b63fd58ed93dc47f042ba2acbfba92.scope: Deactivated successfully. Jan 16 20:38:34 localhost.localdomain kernel: cni-podman0: port 1(veth6e889f5c) entered disabled state Jan 16 20:38:34 localhost.localdomain kernel: device veth6e889f5c left promiscuous mode Jan 16 20:38:34 localhost.localdomain kernel: cni-podman0: port 1(veth6e889f5c) entered disabled state Jan 16 20:38:34 localhost.localdomain systemd[1]: run-netns-netns\x2d7cb2214f\x2d0d3d\x2da504\x2df3e4\x2db20c4e08ec08.mount: Deactivated successfully. Jan 16 20:38:34 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a13e99013a910c6503b6753a93348f5624b63fd58ed93dc47f042ba2acbfba92-userdata-shm.mount: Deactivated successfully. Jan 16 20:38:34 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-669935088f89b2bd73267311e7b0e9dbb41e08a79172a01cf516aa906026a3da-merged.mount: Deactivated successfully. Jan 16 20:38:34 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Jan 16 20:38:34 localhost.localdomain build-ironic-env.sh[2352]: IRONIC_HTPASSWD="bootstrap-user:$2y$05$fNzr15rpu7SH2TsDBcaxD.4eidJD6LZuMSMmUCRk7fxaccfxEcxUq" Jan 16 20:38:34 localhost.localdomain build-ironic-env.sh[2352]: EXTERNAL_IP_OPTIONS="ip=dhcp" Jan 16 20:38:34 localhost.localdomain build-ironic-env.sh[2352]: PROVISIONING_IP_OPTIONS="ip=dhcp" Jan 16 20:38:34 localhost.localdomain build-ironic-env.sh[2352]: IRONIC_KERNEL_PARAMS="rd.net.timeout.carrier=30 ip=dhcp" Jan 16 20:38:34 localhost.localdomain systemd[1]: Finished Build Ironic environment. Jan 16 20:38:34 localhost.localdomain systemd[1]: Starting Extract Machine OS Images... Jan 16 20:38:34 localhost.localdomain systemd[1]: Starting Provisioning interface... 
Jan 16 20:38:34 localhost.localdomain kubelet.sh[2579]: I0116 20:38:34.603384 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:38:34 localhost.localdomain kubelet.sh[2579]: I0116 20:38:34.609483 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:38:34 localhost.localdomain kubelet.sh[2579]: I0116 20:38:34.609712 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:38:34 localhost.localdomain kubelet.sh[2579]: I0116 20:38:34.609748 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:38:34 localhost.localdomain sudo[4944]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps Jan 16 20:38:34 localhost.localdomain sudo[4944]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:38:34 localhost.localdomain NetworkManager[1706]: [1705437514.6737] audit: op="connection-add" uuid="9bac471c-e86b-4a85-8357-22465d65a94e" name="provisioning" pid=4945 uid=0 result="success" Jan 16 20:38:34 localhost.localdomain start-provisioning-nic.sh[4945]: Connection 'provisioning' (9bac471c-e86b-4a85-8357-22465d65a94e) successfully added. Jan 16 20:38:34 localhost.localdomain podman[4930]: Trying to pull quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e6607ddc665654a569f992895df82828d5daef254925bafdb1e4e7e260b870e... Jan 16 20:38:34 localhost.localdomain NetworkManager[1706]: [1705437514.7357] agent-manager: agent[54a987c22abdd2b7,:1.129/nmcli-connect/0]: agent registered Jan 16 20:38:34 localhost.localdomain NetworkManager[1706]: [1705437514.7408] device (ens4): disconnecting for new activation request. 
Jan 16 20:38:34 localhost.localdomain NetworkManager[1706]: [1705437514.7415] device (ens4): state change: ip-config -> deactivating (reason 'new-activation', sys-iface-state: 'managed') Jan 16 20:38:34 localhost.localdomain NetworkManager[1706]: [1705437514.7483] audit: op="connection-activate" uuid="9bac471c-e86b-4a85-8357-22465d65a94e" name="provisioning" pid=4970 uid=0 result="success" Jan 16 20:38:34 localhost.localdomain NetworkManager[1706]: [1705437514.7518] device (ens4): state change: deactivating -> disconnected (reason 'new-activation', sys-iface-state: 'managed') Jan 16 20:38:34 localhost.localdomain NetworkManager[1706]: [1705437514.7623] dhcp4 (ens4): canceled DHCP transaction Jan 16 20:38:34 localhost.localdomain NetworkManager[1706]: [1705437514.7624] dhcp4 (ens4): state changed no lease Jan 16 20:38:34 localhost.localdomain sudo[4944]: pam_unix(sudo:session): session closed for user root Jan 16 20:38:34 localhost.localdomain NetworkManager[1706]: [1705437514.7678] device (ens4): Activation: starting connection 'provisioning' (9bac471c-e86b-4a85-8357-22465d65a94e) Jan 16 20:38:34 localhost.localdomain NetworkManager[1706]: [1705437514.7716] device (ens4): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 16 20:38:34 localhost.localdomain NetworkManager[1706]: [1705437514.7722] device (ens4): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 16 20:38:34 localhost.localdomain NetworkManager[1706]: [1705437514.7736] device (ens4): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 16 20:38:34 localhost.localdomain NetworkManager[1706]: [1705437514.7777] device (ens4): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Jan 16 20:38:34 localhost.localdomain NetworkManager[1706]: [1705437514.7896] device (ens4): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Jan 16 20:38:34 localhost.localdomain NetworkManager[1706]: [1705437514.7901] device (ens4): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Jan 16 20:38:34 localhost.localdomain NetworkManager[1706]: [1705437514.7925] device (ens4): Activation: successful, device activated. Jan 16 20:38:34 localhost.localdomain start-provisioning-nic.sh[4970]: Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/8) Jan 16 20:38:34 localhost.localdomain NetworkManager[1706]: [1705437514.7955] manager: startup complete Jan 16 20:38:34 localhost.localdomain systemd[1]: Finished Provisioning interface. Jan 16 20:38:34 localhost.localdomain systemd[1]: Starting DHCP Service for Provisioning Network... Jan 16 20:38:34 localhost.localdomain systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive. Jan 16 20:38:35 localhost.localdomain root[5022]: NM local-dns-prepender triggered by ens4 up. Jan 16 20:38:35 localhost.localdomain nm-dispatcher[5022]: <13>Jan 16 20:38:35 root: NM local-dns-prepender triggered by ens4 up. 
Jan 16 20:38:35 localhost.localdomain nm-dispatcher[5029]: Failed to get unit file state for systemd-resolved.service: No such file or directory Jan 16 20:38:35 localhost.localdomain root[5030]: NM local-dns-prepender: Checking if local DNS IP is the first entry in resolv.conf Jan 16 20:38:35 localhost.localdomain nm-dispatcher[5030]: <13>Jan 16 20:38:35 root: NM local-dns-prepender: Checking if local DNS IP is the first entry in resolv.conf Jan 16 20:38:35 localhost.localdomain root[5034]: NM local-dns-prepender: local DNS IP already is the first entry in resolv.conf Jan 16 20:38:35 localhost.localdomain nm-dispatcher[5034]: <13>Jan 16 20:38:35 root: NM local-dns-prepender: local DNS IP already is the first entry in resolv.conf Jan 16 20:38:35 localhost.localdomain systemd[1]: Started DHCP Service for Provisioning Network. Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: + . /bin/ironic-common.sh Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ set -euxo pipefail Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ IRONIC_IP= Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ PROVISIONING_INTERFACE=ens4 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ PROVISIONING_IP= Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ PROVISIONING_MACS= Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5067]: +++ get_provisioning_interface Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5067]: +++ [[ -n ens4 ]] Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5067]: +++ echo ens4 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5067]: +++ return Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ PROVISIONING_INTERFACE=ens4 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ export PROVISIONING_INTERFACE Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ export LISTEN_ALL_INTERFACES=true Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ LISTEN_ALL_INTERFACES=true Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ export IRONIC_PRIVATE_PORT=6388 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ IRONIC_PRIVATE_PORT=6388 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ export IRONIC_INSPECTOR_PRIVATE_PORT=5049 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ IRONIC_INSPECTOR_PRIVATE_PORT=5049 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ export IRONIC_ACCESS_PORT=6385 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ IRONIC_ACCESS_PORT=6385 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ export IRONIC_LISTEN_PORT=6385 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ IRONIC_LISTEN_PORT=6385 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ export IRONIC_INSPECTOR_ACCESS_PORT=5050 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ IRONIC_INSPECTOR_ACCESS_PORT=5050 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ export IRONIC_INSPECTOR_LISTEN_PORT=5050 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: ++ IRONIC_INSPECTOR_LISTEN_PORT=5050 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: + export HTTP_PORT=6180 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: + HTTP_PORT=6180 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: + DNSMASQ_EXCEPT_INTERFACE=lo Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: + export DNS_PORT=0 Jan 16 20:38:35 localhost.localdomain 
ironic-dnsmasq[5065]: + DNS_PORT=0 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: + wait_for_interface_or_ip Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: + [[ -n '' ]] Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: + [[ -n '' ]] Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: + echo 'Waiting for ens4 interface to be configured' Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: Waiting for ens4 interface to be configured Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5070]: ++ awk '{print $3}' Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5072]: ++ head -n 1 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5069]: ++ ip -br add show scope global up dev ens4 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5071]: ++ sed -e 's%/.*%%' Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: + IRONIC_IP=172.22.0.2 Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: + export IRONIC_IP Jan 16 20:38:35 localhost.localdomain ironic-dnsmasq[5065]: + sleep 1 Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + [[ -n 172.22.0.2 ]] Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + [[ 172.22.0.2 =~ .*:.* ]] Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + export IPV=4 Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + IPV=4 Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + export IRONIC_URL_HOST=172.22.0.2 Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + IRONIC_URL_HOST=172.22.0.2 Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + [[ '' == \p\r\o\v\i\s\i\o\n\i\n\g ]] Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + mkdir -p /shared/tftpboot Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + mkdir -p /shared/tftpboot/arm64-efi Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + mkdir -p /shared/html/images Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + mkdir -p /shared/html/pxelinux.cfg Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + cp /tftpboot/undionly.kpxe /tftpboot/snponly.efi /shared/tftpboot Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + cp /tftpboot/arm64-efi/snponly.efi /shared/tftpboot/arm64-efi Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + python3 -c 'import os; import sys; import jinja2; sys.stdout.write(jinja2.Template(sys.stdin.read()).render(env=os.environ))' Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5083]: ++ echo lo Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5084]: ++ tr , ' ' Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + for iface in $(echo "$DNSMASQ_EXCEPT_INTERFACE" | tr ',' ' ') Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + sed -i -e '/^interface=.*/ a\except-interface=lo' /tmp/dnsmasq.conf Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + cat /tmp/dnsmasq.conf Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + rm /tmp/dnsmasq.conf Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: + exec /usr/sbin/dnsmasq -d -q -C /etc/dnsmasq.conf Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: dnsmasq: started, version 2.85 DNS disabled Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: dnsmasq: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: dnsmasq-dhcp: DHCP, 
IP range 172.22.0.10 -- 172.22.0.254, lease time 2m Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: dnsmasq-dhcp: DHCP, sockets bound exclusively to interface ens4 Jan 16 20:38:36 localhost.localdomain ironic-dnsmasq[5065]: dnsmasq-tftp: TFTP root is /shared/tftpboot Jan 16 20:38:37 localhost.localdomain sudo[5088]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps -a Jan 16 20:38:37 localhost.localdomain sudo[5088]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:38:38 localhost.localdomain podman[4930]: Getting image source signatures Jan 16 20:38:38 localhost.localdomain podman[4930]: Copying blob sha256:03b716135d19fa5f0f07a47ca2b32035e780619f5cc4e8e30fc79ae82d85dbb0 Jan 16 20:38:38 localhost.localdomain podman[4930]: Copying blob sha256:d8190195889efb5333eeec18af9b6c82313edd4db62989bd3a357caca4f13f0e Jan 16 20:38:38 localhost.localdomain podman[4930]: Copying blob sha256:97da74cc6d8fa5d1634eb1760fd1da5c6048619c264c23e62d75f3bf6b8ef5c4 Jan 16 20:38:38 localhost.localdomain podman[4930]: Copying blob sha256:c0497b5720d9b6d3dbdce3c3fe3214686af570ce4440337973cea9ba83ffa7ee Jan 16 20:38:38 localhost.localdomain podman[4930]: Copying blob sha256:cc5ef9451a35fd1c58408dcd20d1fd519541ede98f6d3c854fcb284c00e419f8 Jan 16 20:38:38 localhost.localdomain sudo[5088]: pam_unix(sudo:session): session closed for user root Jan 16 20:38:39 localhost.localdomain systemd[1]: Started libcontainer container 9b54c7d3ec6d3fecc6817c768ea75d9ca79b03111ba72749172b64cbc091a0b8. Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_03_config-operator_01_proxy.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_image.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_ingress.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_network.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_featuregate.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_imagecontentpolicy.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_infrastructure-Default.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_oauth.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_imagetagmirrorset.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_03_security-openshift_01_scc.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_03_securityinternal-openshift_02_rangeallocation.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: 
/assets/config-bootstrap/manifests/0000_10_config-operator_01_dns-Default.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_imagedigestmirrorset.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_03_quota-openshift_01_clusterresourcequota.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_authentication.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_imagecontentsourcepolicy.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_scheduler.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_node.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_project.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_apiserver-Default.crd.yaml Jan 16 20:38:41 localhost.localdomain bootkube.sh[4593]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_console.crd.yaml Jan 16 20:38:41 localhost.localdomain systemd[1]: libpod-9b54c7d3ec6d3fecc6817c768ea75d9ca79b03111ba72749172b64cbc091a0b8.scope: Deactivated successfully. Jan 16 20:38:41 localhost.localdomain systemd[1]: libpod-9b54c7d3ec6d3fecc6817c768ea75d9ca79b03111ba72749172b64cbc091a0b8.scope: Consumed 1.606s CPU time. Jan 16 20:38:41 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9b54c7d3ec6d3fecc6817c768ea75d9ca79b03111ba72749172b64cbc091a0b8-userdata-shm.mount: Deactivated successfully. Jan 16 20:38:41 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-950c71e0aee3fa711ba2e17c17e1568fef9745b64107e727b3dd5fc9fb5ed682-merged.mount: Deactivated successfully. Jan 16 20:38:41 localhost.localdomain bootkube.sh[3228]: Rendering Cluster Version Operator Manifests... Jan 16 20:38:42 localhost.localdomain systemd[1]: Started libcontainer container a8b2f3cfc544e7fd24b2fbd4dcdf2fb9974c24ea60b5b593f732b45b526ae81b. Jan 16 20:38:42 localhost.localdomain systemd[1]: libpod-a8b2f3cfc544e7fd24b2fbd4dcdf2fb9974c24ea60b5b593f732b45b526ae81b.scope: Deactivated successfully. Jan 16 20:38:42 localhost.localdomain systemd[1]: run-runc-a8b2f3cfc544e7fd24b2fbd4dcdf2fb9974c24ea60b5b593f732b45b526ae81b-runc.VlLjgl.mount: Deactivated successfully. Jan 16 20:38:42 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8b2f3cfc544e7fd24b2fbd4dcdf2fb9974c24ea60b5b593f732b45b526ae81b-userdata-shm.mount: Deactivated successfully. Jan 16 20:38:42 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-b001d3b244d7c4b0b2f103ad153e96141130de7f54710c16c842e0846b6de978-merged.mount: Deactivated successfully. Jan 16 20:38:42 localhost.localdomain bootkube.sh[3228]: Rendering CEO Manifests... 
Jan 16 20:38:44 localhost.localdomain kubelet.sh[2579]: I0116 20:38:44.664651 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:38:44 localhost.localdomain kubelet.sh[2579]: I0116 20:38:44.679665 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:38:44 localhost.localdomain kubelet.sh[2579]: I0116 20:38:44.679779 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:38:44 localhost.localdomain kubelet.sh[2579]: I0116 20:38:44.679871 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:38:45 localhost.localdomain systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Jan 16 20:38:47 localhost.localdomain approve-csr.sh[5290]: E0116 20:38:47.391640 5290 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:38:47 localhost.localdomain approve-csr.sh[5290]: E0116 20:38:47.395740 5290 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:38:47 localhost.localdomain approve-csr.sh[5290]: E0116 20:38:47.406501 5290 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:38:47 localhost.localdomain approve-csr.sh[5290]: E0116 20:38:47.412857 5290 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:38:47 localhost.localdomain approve-csr.sh[5290]: E0116 20:38:47.417269 5290 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:38:47 localhost.localdomain approve-csr.sh[5290]: The connection to the server localhost:6443 was refused - did you specify the right host or port? Jan 16 20:38:54 localhost.localdomain kubelet.sh[2579]: I0116 20:38:54.772081 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:38:54 localhost.localdomain kubelet.sh[2579]: I0116 20:38:54.775571 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:38:54 localhost.localdomain kubelet.sh[2579]: I0116 20:38:54.775684 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:38:54 localhost.localdomain kubelet.sh[2579]: I0116 20:38:54.775716 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:38:55 localhost.localdomain systemd[1]: Started libcontainer container 57ecab45f01d6fb36c66e88e6f8d3eca31925c94ff97a14e5225b6ba0b8e451b. Jan 16 20:38:55 localhost.localdomain systemd[1]: run-runc-57ecab45f01d6fb36c66e88e6f8d3eca31925c94ff97a14e5225b6ba0b8e451b-runc.197EFU.mount: Deactivated successfully. 
Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.920468 1 bootstrap_ip_linux.go:35] retrieved Address map map[0xc0005b00e0:[10.88.0.1/16 cni-podman0 fe80::c012:28ff:fe19:8094/64] 0xc00096e8f0:[127.0.0.1/8 lo ::1/128] 0xc00096e9c0:[10.0.0.70/24 ens3 fe80::5c1d:fdf0:57d1:c5b6/64] 0xc00096ea90:[172.22.0.2/24 ens4 fe80::bac3:a798:5dbf:cb58/64]] Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921257 1 bootstrap_ip_linux.go:54] Ignoring route non Router advertisement route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254} Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921301 1 bootstrap_ip_linux.go:54] Ignoring route non Router advertisement route {Ifindex: 4 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254} Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921316 1 bootstrap_ip_linux.go:54] Ignoring route non Router advertisement route {Ifindex: 2 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254} Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921329 1 bootstrap_ip_linux.go:54] Ignoring route non Router advertisement route {Ifindex: 3 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254} Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921343 1 bootstrap_ip_linux.go:64] Retrieved route map map[] Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921450 1 bootstrap_ip.go:158] Filtered address 127.0.0.1/8 lo Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921471 1 bootstrap_ip.go:158] Filtered address ::1/128 Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921491 1 bootstrap_ip.go:187] Checking whether address 10.0.0.70/24 ens3 contains VIP 10.0.0.70 Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921508 1 bootstrap_ip.go:189] Address 10.0.0.70/24 ens3 contains VIP 10.0.0.70 Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921524 1 bootstrap_ip.go:187] Checking whether address 10.0.0.70/24 ens3 contains VIP 172.22.0.2 Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921540 1 bootstrap_ip.go:187] Checking whether address 10.0.0.70/24 ens3 contains VIP 10.88.0.1 Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921552 1 bootstrap_ip.go:158] Filtered address fe80::5c1d:fdf0:57d1:c5b6/64 Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921563 1 bootstrap_ip.go:158] Filtered address 172.22.0.2/24 ens4 Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921576 1 bootstrap_ip.go:158] Filtered address fe80::bac3:a798:5dbf:cb58/64 Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921587 1 bootstrap_ip.go:158] Filtered address 10.88.0.1/16 cni-podman0 Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921598 1 bootstrap_ip.go:158] Filtered address fe80::c012:28ff:fe19:8094/64 Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921608 1 bootstrap_ip.go:200] Found routable IPs [10.0.0.70] Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921619 1 render.go:414] using bootstrap IP 10.0.0.70 Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: I0116 20:38:55.921724 1 render.go:604] Bootstrapping etcd using: "HAScalingStrategy" Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: WARNING: Validity period of the certificate for "etcd-signer" is greater than 5 years! 
Jan 16 20:38:55 localhost.localdomain bootkube.sh[5262]: WARNING: For security reasons it is strongly recommended to shorten this period! Jan 16 20:38:56 localhost.localdomain sudo[5343]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps -a Jan 16 20:38:56 localhost.localdomain sudo[5343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:38:56 localhost.localdomain sudo[5343]: pam_unix(sudo:session): session closed for user root Jan 16 20:38:56 localhost.localdomain bootkube.sh[5262]: WARNING: Validity period of the certificate for "etcd-metric-signer" is greater than 5 years! Jan 16 20:38:56 localhost.localdomain bootkube.sh[5262]: WARNING: For security reasons it is strongly recommended to shorten this period! Jan 16 20:38:58 localhost.localdomain bootkube.sh[5262]: Writing asset: /assets/etcd-bootstrap/manifests/etcd-signer-secret.yaml Jan 16 20:38:58 localhost.localdomain bootkube.sh[5262]: Writing asset: /assets/etcd-bootstrap/manifests/00_etcd-endpoints-cm.yaml Jan 16 20:38:58 localhost.localdomain bootkube.sh[5262]: Writing asset: /assets/etcd-bootstrap/manifests/etcd-serving-ca-configmap.yaml Jan 16 20:38:58 localhost.localdomain bootkube.sh[5262]: Writing asset: /assets/etcd-bootstrap/manifests/etcd-client-secret.yaml Jan 16 20:38:58 localhost.localdomain bootkube.sh[5262]: Writing asset: /assets/etcd-bootstrap/manifests/etcd-metric-client-secret.yaml Jan 16 20:38:58 localhost.localdomain bootkube.sh[5262]: Writing asset: /assets/etcd-bootstrap/manifests/etcd-metric-serving-ca-configmap.yaml Jan 16 20:38:58 localhost.localdomain bootkube.sh[5262]: Writing asset: /assets/etcd-bootstrap/manifests/etcd-metric-signer-secret.yaml Jan 16 20:38:58 localhost.localdomain bootkube.sh[5262]: Writing asset: /assets/etcd-bootstrap/manifests/openshift-etcd-svc.yaml Jan 16 20:38:58 localhost.localdomain bootkube.sh[5262]: Writing asset: /assets/etcd-bootstrap/manifests/00_openshift-etcd-ns.yaml Jan 16 20:38:58 localhost.localdomain bootkube.sh[5262]: Writing asset: /assets/etcd-bootstrap/manifests/etcd-ca-bundle-configmap.yaml Jan 16 20:38:58 localhost.localdomain bootkube.sh[5262]: Writing asset: /assets/etcd-bootstrap/etc-kubernetes/manifests/etcd-member-pod.yaml Jan 16 20:38:58 localhost.localdomain systemd[1]: libpod-57ecab45f01d6fb36c66e88e6f8d3eca31925c94ff97a14e5225b6ba0b8e451b.scope: Deactivated successfully. Jan 16 20:38:58 localhost.localdomain systemd[1]: libpod-57ecab45f01d6fb36c66e88e6f8d3eca31925c94ff97a14e5225b6ba0b8e451b.scope: Consumed 2.569s CPU time. Jan 16 20:38:58 localhost.localdomain sudo[5363]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps -a Jan 16 20:38:58 localhost.localdomain sudo[5363]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:38:58 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-57ecab45f01d6fb36c66e88e6f8d3eca31925c94ff97a14e5225b6ba0b8e451b-userdata-shm.mount: Deactivated successfully. Jan 16 20:38:58 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-5890e1c6d8d25821819ab6a8c400205161a27a04ea1c418313b59f0d417a1a84-merged.mount: Deactivated successfully.
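The signer warnings fire because the bootstrap CA certificates are presumably minted with lifetimes around ten years, tripping a five-year sanity check in the renderer. The check itself reduces to comparing NotAfter − NotBefore against a cap; a sketch (the five-year threshold comes from the log, everything else is assumed):

    // certcheck.go - sketch of the validity-period warning seen above.
    package main

    import (
    	"crypto/x509"
    	"fmt"
    	"time"
    )

    const maxValidity = 5 * 365 * 24 * time.Hour // ~5 years, per the warning text

    func warnIfLongLived(name string, cert *x509.Certificate) {
    	if cert.NotAfter.Sub(cert.NotBefore) > maxValidity {
    		fmt.Printf("WARNING: Validity period of the certificate for %q is greater than 5 years!\n", name)
    	}
    }

    func main() {
    	// Hypothetical 10-year signer certificate, like "etcd-signer" above.
    	cert := &x509.Certificate{
    		NotBefore: time.Now(),
    		NotAfter:  time.Now().AddDate(10, 0, 0),
    	}
    	warnIfLongLived("etcd-signer", cert)
    }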
Jan 16 20:38:58 localhost.localdomain kubelet.sh[2579]: I0116 20:38:58.718648 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[openshift-etcd/etcd-bootstrap-member-localhost.localdomain] Jan 16 20:38:58 localhost.localdomain kubelet.sh[2579]: I0116 20:38:58.719250 2579 topology_manager.go:212] "Topology Admit Handler" podUID=d2aec066a4d1ca73a8d9ec42dd9c12ab podNamespace="openshift-etcd" podName="etcd-bootstrap-member-localhost.localdomain" Jan 16 20:38:58 localhost.localdomain kubelet.sh[2579]: I0116 20:38:58.719495 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:38:58 localhost.localdomain kubelet.sh[2579]: I0116 20:38:58.729300 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:38:58 localhost.localdomain kubelet.sh[2579]: I0116 20:38:58.729510 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:38:58 localhost.localdomain kubelet.sh[2579]: I0116 20:38:58.729561 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:38:58 localhost.localdomain systemd[1]: Created slice libcontainer container kubepods-burstable-podd2aec066a4d1ca73a8d9ec42dd9c12ab.slice. Jan 16 20:38:58 localhost.localdomain sudo[5363]: pam_unix(sudo:session): session closed for user root Jan 16 20:38:58 localhost.localdomain kubelet.sh[2579]: I0116 20:38:58.808856 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:38:58 localhost.localdomain kubelet.sh[2579]: I0116 20:38:58.814901 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:38:58 localhost.localdomain kubelet.sh[2579]: I0116 20:38:58.815216 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:38:58 localhost.localdomain kubelet.sh[2579]: I0116 20:38:58.815266 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:38:58 localhost.localdomain kubelet.sh[2579]: I0116 20:38:58.829438 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d2aec066a4d1ca73a8d9ec42dd9c12ab-certs\") pod \"etcd-bootstrap-member-localhost.localdomain\" (UID: \"d2aec066a4d1ca73a8d9ec42dd9c12ab\") " pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" Jan 16 20:38:58 localhost.localdomain kubelet.sh[2579]: I0116 20:38:58.829621 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d2aec066a4d1ca73a8d9ec42dd9c12ab-data-dir\") pod \"etcd-bootstrap-member-localhost.localdomain\" (UID: \"d2aec066a4d1ca73a8d9ec42dd9c12ab\") " pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" Jan 16 20:38:58 localhost.localdomain kubelet.sh[2579]: I0116 20:38:58.930530 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d2aec066a4d1ca73a8d9ec42dd9c12ab-certs\") pod \"etcd-bootstrap-member-localhost.localdomain\" (UID: \"d2aec066a4d1ca73a8d9ec42dd9c12ab\") " 
pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" Jan 16 20:38:58 localhost.localdomain kubelet.sh[2579]: I0116 20:38:58.930700 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d2aec066a4d1ca73a8d9ec42dd9c12ab-data-dir\") pod \"etcd-bootstrap-member-localhost.localdomain\" (UID: \"d2aec066a4d1ca73a8d9ec42dd9c12ab\") " pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" Jan 16 20:38:58 localhost.localdomain kubelet.sh[2579]: I0116 20:38:58.930792 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d2aec066a4d1ca73a8d9ec42dd9c12ab-certs\") pod \"etcd-bootstrap-member-localhost.localdomain\" (UID: \"d2aec066a4d1ca73a8d9ec42dd9c12ab\") " pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" Jan 16 20:38:58 localhost.localdomain kubelet.sh[2579]: I0116 20:38:58.931235 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d2aec066a4d1ca73a8d9ec42dd9c12ab-data-dir\") pod \"etcd-bootstrap-member-localhost.localdomain\" (UID: \"d2aec066a4d1ca73a8d9ec42dd9c12ab\") " pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" Jan 16 20:38:59 localhost.localdomain bootkube.sh[3228]: Rendering Kubernetes API server core manifests... Jan 16 20:38:59 localhost.localdomain kubelet.sh[2579]: I0116 20:38:59.116364 2579 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" Jan 16 20:38:59 localhost.localdomain crio[2304]: time="2024-01-16 20:38:59.119649971Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-bootstrap-member-localhost.localdomain/POD" id=230d76c2-5beb-4387-83f8-794b260fe74d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 16 20:38:59 localhost.localdomain crio[2304]: time="2024-01-16 20:38:59.120257794Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 20:39:01 localhost.localdomain sudo[5421]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps -a Jan 16 20:39:01 localhost.localdomain sudo[5421]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:39:02 localhost.localdomain sudo[5421]: pam_unix(sudo:session): session closed for user root Jan 16 20:39:02 localhost.localdomain kubelet.sh[2579]: W0116 20:39:02.865667 2579 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2aec066a4d1ca73a8d9ec42dd9c12ab.slice/crio-cff8c870f70387983fc3a7565b3b0d89d90c33a197af0b4e33bc8bc95d7c0757 WatchSource:0}: Error finding container cff8c870f70387983fc3a7565b3b0d89d90c33a197af0b4e33bc8bc95d7c0757: Status 404 returned error can't find the container with id cff8c870f70387983fc3a7565b3b0d89d90c33a197af0b4e33bc8bc95d7c0757 Jan 16 20:39:02 localhost.localdomain crio[2304]: time="2024-01-16 20:39:02.870782308Z" level=info msg="Ran pod sandbox cff8c870f70387983fc3a7565b3b0d89d90c33a197af0b4e33bc8bc95d7c0757 with infra container: openshift-etcd/etcd-bootstrap-member-localhost.localdomain/POD" id=230d76c2-5beb-4387-83f8-794b260fe74d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 16 20:39:02 localhost.localdomain crio[2304]: time="2024-01-16 20:39:02.889692161Z" level=info msg="Checking image status: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c08492f23eacf58b8ad7a028b9b648a6ae7bd2a4d24d8648d2ee4d0e3fc4d75" id=c814690a-da4e-426f-800b-6e1714b0df54 name=/runtime.v1.ImageService/ImageStatus Jan 16 20:39:02 localhost.localdomain crio[2304]: time="2024-01-16 20:39:02.893854268Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c08492f23eacf58b8ad7a028b9b648a6ae7bd2a4d24d8648d2ee4d0e3fc4d75 not found" id=c814690a-da4e-426f-800b-6e1714b0df54 name=/runtime.v1.ImageService/ImageStatus Jan 16 20:39:02 localhost.localdomain kubelet.sh[2579]: I0116 20:39:02.895517 2579 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 16 20:39:02 localhost.localdomain kubelet.sh[2579]: I0116 20:39:02.895761 2579 provider.go:82] Docker config file not found: couldn't find valid .dockercfg after checking in [/var/lib/kubelet /] Jan 16 20:39:02 localhost.localdomain crio[2304]: time="2024-01-16 20:39:02.900592734Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c08492f23eacf58b8ad7a028b9b648a6ae7bd2a4d24d8648d2ee4d0e3fc4d75" id=30989547-9b4c-4f0e-9012-d249296a2238 name=/runtime.v1.ImageService/PullImage Jan 16 20:39:02 localhost.localdomain kubelet.sh[2579]: I0116 20:39:02.932544 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" event=&{ID:d2aec066a4d1ca73a8d9ec42dd9c12ab Type:ContainerStarted Data:cff8c870f70387983fc3a7565b3b0d89d90c33a197af0b4e33bc8bc95d7c0757} Jan 16 20:39:02 localhost.localdomain crio[2304]: time="2024-01-16 20:39:02.949636134Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c08492f23eacf58b8ad7a028b9b648a6ae7bd2a4d24d8648d2ee4d0e3fc4d75\"" Jan 16 20:39:03 localhost.localdomain sudo[5442]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps Jan 16 20:39:03 localhost.localdomain sudo[5442]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:39:04 localhost.localdomain sudo[5442]: pam_unix(sudo:session): session closed for user root Jan 16 20:39:04 localhost.localdomain kubelet.sh[2579]: I0116 20:39:04.826793 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:39:04 localhost.localdomain kubelet.sh[2579]: I0116 20:39:04.831738 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:39:04 localhost.localdomain kubelet.sh[2579]: I0116 20:39:04.832359 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:39:04 localhost.localdomain kubelet.sh[2579]: I0116 20:39:04.832594 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:39:05 localhost.localdomain crio[2304]: time="2024-01-16 20:39:05.217324209Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c08492f23eacf58b8ad7a028b9b648a6ae7bd2a4d24d8648d2ee4d0e3fc4d75\"" Jan 16 20:39:06 localhost.localdomain sudo[5453]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps Jan 16 20:39:06 localhost.localdomain sudo[5453]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:39:06 localhost.localdomain sudo[5453]: pam_unix(sudo:session): session closed for user root 
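The cri-o lines spell out the standard check-then-pull sequence: ImageStatus reports the digest-pinned release image as "not found", the kubelet refreshes its credential providers (finding no .dockercfg), and PullImage fetches it from quay.io. The sketch below captures that flow against a deliberately tiny stand-in interface, not the real CRI gRPC client:

    // pullifabsent.go - sketch of the ImageStatus -> PullImage pattern.
    package main

    import (
    	"context"
    	"fmt"
    )

    // ImageService is a minimal stand-in for the CRI image service.
    type ImageService interface {
    	ImageStatus(ctx context.Context, ref string) (found bool, err error)
    	PullImage(ctx context.Context, ref string) error
    }

    func ensureImage(ctx context.Context, svc ImageService, ref string) error {
    	found, err := svc.ImageStatus(ctx, ref)
    	if err != nil {
    		return err
    	}
    	if found {
    		return nil // already in containers/storage; nothing to do
    	}
    	fmt.Println("Image", ref, "not found; pulling")
    	return svc.PullImage(ctx, ref)
    }

    // fakeSvc lets the sketch run without a container runtime.
    type fakeSvc struct{ have map[string]bool }

    func (f *fakeSvc) ImageStatus(_ context.Context, ref string) (bool, error) { return f.have[ref], nil }
    func (f *fakeSvc) PullImage(_ context.Context, ref string) error           { f.have[ref] = true; return nil }

    func main() {
    	ref := "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c08492f23eacf58b8ad7a028b9b648a6ae7bd2a4d24d8648d2ee4d0e3fc4d75"
    	if err := ensureImage(context.Background(), &fakeSvc{have: map[string]bool{}}, ref); err != nil {
    		fmt.Println(err)
    	}
    }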
Jan 16 20:39:07 localhost.localdomain approve-csr.sh[5464]: E0116 20:39:07.919338 5464 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:39:07 localhost.localdomain approve-csr.sh[5464]: E0116 20:39:07.924484 5464 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:39:07 localhost.localdomain approve-csr.sh[5464]: E0116 20:39:07.931676 5464 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:39:07 localhost.localdomain approve-csr.sh[5464]: E0116 20:39:07.938756 5464 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:39:07 localhost.localdomain approve-csr.sh[5464]: E0116 20:39:07.942172 5464 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:39:07 localhost.localdomain approve-csr.sh[5464]: The connection to the server localhost:6443 was refused - did you specify the right host or port? Jan 16 20:39:08 localhost.localdomain sudo[5476]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps Jan 16 20:39:08 localhost.localdomain sudo[5476]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:39:08 localhost.localdomain sudo[5476]: pam_unix(sudo:session): session closed for user root Jan 16 20:39:11 localhost.localdomain systemd[1948]: Started podman-5496.scope. 
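Once the apiserver finally answers, approve-csr.sh's job is to list pending certificate signing requests and approve them so nodes can join; the actual script shells out to the CLI, but a roughly equivalent client-go loop looks like this (kubeconfig path and condition Reason are assumptions for illustration):

    // approvecsrs.go - illustrative client-go version of the approve-csr loop.
    package main

    import (
    	"context"
    	"fmt"

    	certv1 "k8s.io/api/certificates/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func approvePending(ctx context.Context, cs kubernetes.Interface) error {
    	csrs, err := cs.CertificatesV1().CertificateSigningRequests().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err // this is where "connection refused" surfaces while 6443 is down
    	}
    	for i := range csrs.Items {
    		csr := &csrs.Items[i]
    		if len(csr.Status.Conditions) > 0 {
    			continue // already approved or denied
    		}
    		csr.Status.Conditions = append(csr.Status.Conditions, certv1.CertificateSigningRequestCondition{
    			Type:   certv1.CertificateApproved,
    			Status: "True",
    			Reason: "BootstrapApprove", // hypothetical reason string
    		})
    		if _, err := cs.CertificatesV1().CertificateSigningRequests().
    			UpdateApproval(ctx, csr.Name, csr, metav1.UpdateOptions{}); err != nil {
    			return err
    		}
    		fmt.Println("approved CSR", csr.Name)
    	}
    	return nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	if err := approvePending(context.Background(), kubernetes.NewForConfigOrDie(cfg)); err != nil {
    		fmt.Println(err)
    	}
    }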
Jan 16 20:39:13 localhost.localdomain sudo[5505]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps Jan 16 20:39:13 localhost.localdomain sudo[5505]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:39:13 localhost.localdomain sudo[5505]: pam_unix(sudo:session): session closed for user root Jan 16 20:39:14 localhost.localdomain kubelet.sh[2579]: I0116 20:39:14.964306 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:39:14 localhost.localdomain kubelet.sh[2579]: I0116 20:39:14.968500 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:39:14 localhost.localdomain kubelet.sh[2579]: I0116 20:39:14.968549 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:39:14 localhost.localdomain kubelet.sh[2579]: I0116 20:39:14.968574 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:39:16 localhost.localdomain sudo[5525]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps -a Jan 16 20:39:16 localhost.localdomain sudo[5525]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:39:19 localhost.localdomain sudo[5525]: pam_unix(sudo:session): session closed for user root Jan 16 20:39:19 localhost.localdomain crio[2304]: time="2024-01-16 20:39:19.696106009Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c08492f23eacf58b8ad7a028b9b648a6ae7bd2a4d24d8648d2ee4d0e3fc4d75" id=30989547-9b4c-4f0e-9012-d249296a2238 name=/runtime.v1.ImageService/PullImage Jan 16 20:39:19 localhost.localdomain crio[2304]: time="2024-01-16 20:39:19.699177303Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c08492f23eacf58b8ad7a028b9b648a6ae7bd2a4d24d8648d2ee4d0e3fc4d75" id=ea196fe8-ce58-487f-9fbf-17e9c2ee2ccb name=/runtime.v1.ImageService/ImageStatus Jan 16 20:39:19 localhost.localdomain crio[2304]: time="2024-01-16 20:39:19.702214089Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4744468d358574597e15c259edf1e3d02347e1ce482b841f37e67bde9080dc2c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c08492f23eacf58b8ad7a028b9b648a6ae7bd2a4d24d8648d2ee4d0e3fc4d75],Size_:476276191,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=ea196fe8-ce58-487f-9fbf-17e9c2ee2ccb name=/runtime.v1.ImageService/ImageStatus Jan 16 20:39:19 localhost.localdomain crio[2304]: time="2024-01-16 20:39:19.714075657Z" level=info msg="Creating container: openshift-etcd/etcd-bootstrap-member-localhost.localdomain/etcdctl" id=20acb6f8-884a-494f-83b9-2f54e6d83aa9 name=/runtime.v1.RuntimeService/CreateContainer Jan 16 20:39:19 localhost.localdomain crio[2304]: time="2024-01-16 20:39:19.714608694Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 20:39:20 localhost.localdomain systemd[1]: run-runc-4e3f48ab1b656100caa5563c368d5461d0868b59e81c027e360fb77a59643b69-runc.K4Aeo8.mount: Deactivated successfully. Jan 16 20:39:20 localhost.localdomain systemd[1]: Started libcontainer container 4e3f48ab1b656100caa5563c368d5461d0868b59e81c027e360fb77a59643b69. 
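Note that the release payload is referenced by digest (name@sha256:…) rather than by tag, which pins the exact image content regardless of registry-side retagging. A quick sketch of splitting and validating such a reference (the validation rules here are simplified, not the full OCI reference grammar):

    // digestref.go - sketch: split a sha256-pinned image reference.
    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    var hexDigest = regexp.MustCompile(`^[a-f0-9]{64}$`)

    func splitDigestRef(ref string) (name, digest string, err error) {
    	name, digest, ok := strings.Cut(ref, "@sha256:")
    	if !ok || !hexDigest.MatchString(digest) {
    		return "", "", fmt.Errorf("not a sha256-pinned reference: %s", ref)
    	}
    	return name, digest, nil
    }

    func main() {
    	name, dgst, err := splitDigestRef(
    		"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c08492f23eacf58b8ad7a028b9b648a6ae7bd2a4d24d8648d2ee4d0e3fc4d75")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(name, dgst[:12]) // repo name plus truncated digest
    }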
Jan 16 20:39:20 localhost.localdomain systemd[1]: Started crio-conmon-b18934263a10e1ade9bacd96d05767a925b6442974e6e7b8c1652c9167cbc97f.scope. Jan 16 20:39:21 localhost.localdomain systemd[1]: run-runc-b18934263a10e1ade9bacd96d05767a925b6442974e6e7b8c1652c9167cbc97f-runc.WTMGXL.mount: Deactivated successfully. Jan 16 20:39:21 localhost.localdomain systemd[1]: Started libcontainer container b18934263a10e1ade9bacd96d05767a925b6442974e6e7b8c1652c9167cbc97f. Jan 16 20:39:21 localhost.localdomain crio[2304]: time="2024-01-16 20:39:21.310234748Z" level=info msg="Created container b18934263a10e1ade9bacd96d05767a925b6442974e6e7b8c1652c9167cbc97f: openshift-etcd/etcd-bootstrap-member-localhost.localdomain/etcdctl" id=20acb6f8-884a-494f-83b9-2f54e6d83aa9 name=/runtime.v1.RuntimeService/CreateContainer Jan 16 20:39:21 localhost.localdomain crio[2304]: time="2024-01-16 20:39:21.313314646Z" level=info msg="Starting container: b18934263a10e1ade9bacd96d05767a925b6442974e6e7b8c1652c9167cbc97f" id=3fb8cb36-323e-4dfe-8a11-0a37b18ebb6f name=/runtime.v1.RuntimeService/StartContainer Jan 16 20:39:21 localhost.localdomain crio[2304]: time="2024-01-16 20:39:21.379428934Z" level=info msg="Started container" PID=5589 containerID=b18934263a10e1ade9bacd96d05767a925b6442974e6e7b8c1652c9167cbc97f description=openshift-etcd/etcd-bootstrap-member-localhost.localdomain/etcdctl id=3fb8cb36-323e-4dfe-8a11-0a37b18ebb6f name=/runtime.v1.RuntimeService/StartContainer sandboxID=cff8c870f70387983fc3a7565b3b0d89d90c33a197af0b4e33bc8bc95d7c0757 Jan 16 20:39:21 localhost.localdomain crio[2304]: time="2024-01-16 20:39:21.459333822Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c08492f23eacf58b8ad7a028b9b648a6ae7bd2a4d24d8648d2ee4d0e3fc4d75" id=6abba3f5-8d49-4e6a-b88b-c754475802f5 name=/runtime.v1.ImageService/ImageStatus Jan 16 20:39:21 localhost.localdomain crio[2304]: time="2024-01-16 20:39:21.463477062Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4744468d358574597e15c259edf1e3d02347e1ce482b841f37e67bde9080dc2c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c08492f23eacf58b8ad7a028b9b648a6ae7bd2a4d24d8648d2ee4d0e3fc4d75],Size_:476276191,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=6abba3f5-8d49-4e6a-b88b-c754475802f5 name=/runtime.v1.ImageService/ImageStatus Jan 16 20:39:21 localhost.localdomain crio[2304]: time="2024-01-16 20:39:21.466449454Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c08492f23eacf58b8ad7a028b9b648a6ae7bd2a4d24d8648d2ee4d0e3fc4d75" id=ec8c1703-71d7-4746-8466-4fca2e05d791 name=/runtime.v1.ImageService/ImageStatus Jan 16 20:39:21 localhost.localdomain crio[2304]: time="2024-01-16 20:39:21.469421159Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4744468d358574597e15c259edf1e3d02347e1ce482b841f37e67bde9080dc2c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c08492f23eacf58b8ad7a028b9b648a6ae7bd2a4d24d8648d2ee4d0e3fc4d75],Size_:476276191,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=ec8c1703-71d7-4746-8466-4fca2e05d791 name=/runtime.v1.ImageService/ImageStatus Jan 16 20:39:21 localhost.localdomain crio[2304]: time="2024-01-16 20:39:21.472645078Z" level=info msg="Creating container: openshift-etcd/etcd-bootstrap-member-localhost.localdomain/etcd" 
id=8311270a-3f72-4712-89ba-a291f1b3b7fa name=/runtime.v1.RuntimeService/CreateContainer Jan 16 20:39:21 localhost.localdomain crio[2304]: time="2024-01-16 20:39:21.474237666Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 20:39:22 localhost.localdomain systemd[1]: Started crio-conmon-204c56104733ff2eda022439e8ce5788302cd791cf617dc9e9e12410e9fdb46b.scope. Jan 16 20:39:22 localhost.localdomain kubelet.sh[2579]: I0116 20:39:22.055577 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" event=&{ID:d2aec066a4d1ca73a8d9ec42dd9c12ab Type:ContainerStarted Data:b18934263a10e1ade9bacd96d05767a925b6442974e6e7b8c1652c9167cbc97f} Jan 16 20:39:22 localhost.localdomain systemd[1]: Started libcontainer container 204c56104733ff2eda022439e8ce5788302cd791cf617dc9e9e12410e9fdb46b. Jan 16 20:39:22 localhost.localdomain crio[2304]: time="2024-01-16 20:39:22.224749811Z" level=info msg="Created container 204c56104733ff2eda022439e8ce5788302cd791cf617dc9e9e12410e9fdb46b: openshift-etcd/etcd-bootstrap-member-localhost.localdomain/etcd" id=8311270a-3f72-4712-89ba-a291f1b3b7fa name=/runtime.v1.RuntimeService/CreateContainer Jan 16 20:39:22 localhost.localdomain crio[2304]: time="2024-01-16 20:39:22.228379744Z" level=info msg="Starting container: 204c56104733ff2eda022439e8ce5788302cd791cf617dc9e9e12410e9fdb46b" id=61300400-30c6-4996-bc48-72bb968b4645 name=/runtime.v1.RuntimeService/StartContainer Jan 16 20:39:22 localhost.localdomain crio[2304]: time="2024-01-16 20:39:22.326491735Z" level=info msg="Started container" PID=5629 containerID=204c56104733ff2eda022439e8ce5788302cd791cf617dc9e9e12410e9fdb46b description=openshift-etcd/etcd-bootstrap-member-localhost.localdomain/etcd id=61300400-30c6-4996-bc48-72bb968b4645 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cff8c870f70387983fc3a7565b3b0d89d90c33a197af0b4e33bc8bc95d7c0757 Jan 16 20:39:22 localhost.localdomain sudo[5659]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps Jan 16 20:39:22 localhost.localdomain sudo[5659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:39:22 localhost.localdomain sudo[5659]: pam_unix(sudo:session): session closed for user root Jan 16 20:39:23 localhost.localdomain kubelet.sh[2579]: I0116 20:39:23.071121 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" event=&{ID:d2aec066a4d1ca73a8d9ec42dd9c12ab Type:ContainerStarted Data:204c56104733ff2eda022439e8ce5788302cd791cf617dc9e9e12410e9fdb46b} Jan 16 20:39:23 localhost.localdomain kubelet.sh[2579]: I0116 20:39:23.072726 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:39:23 localhost.localdomain kubelet.sh[2579]: I0116 20:39:23.082229 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:39:23 localhost.localdomain kubelet.sh[2579]: I0116 20:39:23.082452 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:39:23 localhost.localdomain kubelet.sh[2579]: I0116 20:39:23.082562 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:39:24 localhost.localdomain kubelet.sh[2579]: I0116 20:39:24.076739 2579 
kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:39:24 localhost.localdomain kubelet.sh[2579]: I0116 20:39:24.082351 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:39:24 localhost.localdomain kubelet.sh[2579]: I0116 20:39:24.082461 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:39:24 localhost.localdomain kubelet.sh[2579]: I0116 20:39:24.082494 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:39:25 localhost.localdomain kubelet.sh[2579]: I0116 20:39:25.057287 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:39:25 localhost.localdomain kubelet.sh[2579]: I0116 20:39:25.061563 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:39:25 localhost.localdomain kubelet.sh[2579]: I0116 20:39:25.061850 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:39:25 localhost.localdomain kubelet.sh[2579]: I0116 20:39:25.062346 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:39:28 localhost.localdomain approve-csr.sh[5676]: E0116 20:39:28.427223 5676 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:39:28 localhost.localdomain approve-csr.sh[5676]: E0116 20:39:28.430048 5676 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:39:28 localhost.localdomain approve-csr.sh[5676]: E0116 20:39:28.432390 5676 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:39:28 localhost.localdomain approve-csr.sh[5676]: E0116 20:39:28.438478 5676 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:39:28 localhost.localdomain approve-csr.sh[5676]: E0116 20:39:28.439497 5676 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:39:28 localhost.localdomain approve-csr.sh[5676]: The connection to the server localhost:6443 was refused - did you specify the right host or port? 
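The kubelet blocks repeating every ten seconds or so are its periodic node-status sync: each pass re-evaluates the node conditions and records the same three healthy-state events. Structurally it is a ticker-driven loop; a stubbed sketch (the 10 s cadence matches the log, the probe functions are placeholders):

    // nodestatus.go - sketch of the recurring node-condition event loop.
    package main

    import (
    	"fmt"
    	"time"
    )

    type condition struct {
    	event string
    	ok    func() bool
    }

    func main() {
    	conds := []condition{
    		{"NodeHasSufficientMemory", func() bool { return true }}, // stubbed probes
    		{"NodeHasNoDiskPressure", func() bool { return true }},
    		{"NodeHasSufficientPID", func() bool { return true }},
    	}
    	ticker := time.NewTicker(10 * time.Second)
    	defer ticker.Stop()
    	for range ticker.C {
    		for _, c := range conds {
    			if c.ok() {
    				fmt.Printf("Recording event message for node %q event=%q\n",
    					"localhost.localdomain", c.event)
    			}
    		}
    	}
    }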
Jan 16 20:39:35 localhost.localdomain kubelet.sh[2579]: I0116 20:39:35.139082 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:39:35 localhost.localdomain kubelet.sh[2579]: I0116 20:39:35.142385 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:39:35 localhost.localdomain kubelet.sh[2579]: I0116 20:39:35.142500 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:39:35 localhost.localdomain kubelet.sh[2579]: I0116 20:39:35.142539 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/bootstrap-manifests/kube-apiserver-pod.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_cr-scc-hostaccess.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_cr-scc-nonroot-v2.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_scc-anyuid.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_scc-hostaccess.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/00_openshift-kube-apiserver-operator-ns.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_cr-scc-privileged.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_cr-scc-hostmount-anyuid.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_cr-scc-hostnetwork.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_crb-systemauthenticated-scc-restricted-v2.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/cluster-role-binding-kube-apiserver.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/configmap-admin-kubeconfig-client-ca.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/configmap-kubelet-bootstrap-kubeconfig-ca.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_cr-scc-restricted.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_scc-hostmount-anyuid.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_scc-nonroot-v2.yaml Jan 16 20:39:38 localhost.localdomain 
bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_scc-privileged.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/configmap-sa-token-signing-certs.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/secret-control-plane-client-signer.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/secret-localhost-serving-signer.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_cr-scc-restricted-v2.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_scc-hostnetwork-v2.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_scc-restricted-v2.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_scc-restricted.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/configmap-csr-controller-ca.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/secret-loadbalancer-serving-signer.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_cr-scc-nonroot.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/00_openshift-kube-apiserver-ns.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/apiserver.openshift.io_apirequestcount.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/secret-aggregator-client-signer.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_cr-scc-hostnetwork-v2.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_scc-nonroot.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/cluster-role-kube-apiserver.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/secret-bound-sa-token-signing-key.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/secret-kube-apiserver-to-kubelet-signer.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/secret-service-network-serving-signer.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_cr-scc-anyuid.yaml Jan 16 20:39:38 localhost.localdomain bootkube.sh[5400]: Writing asset: /assets/kube-apiserver-bootstrap/manifests/0000_20_kube-apiserver-operator_00_scc-hostnetwork.yaml Jan 16 20:39:38 
localhost.localdomain systemd[1]: libpod-4e3f48ab1b656100caa5563c368d5461d0868b59e81c027e360fb77a59643b69.scope: Deactivated successfully. Jan 16 20:39:38 localhost.localdomain systemd[1]: libpod-4e3f48ab1b656100caa5563c368d5461d0868b59e81c027e360fb77a59643b69.scope: Consumed 17.023s CPU time. Jan 16 20:39:39 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4e3f48ab1b656100caa5563c368d5461d0868b59e81c027e360fb77a59643b69-userdata-shm.mount: Deactivated successfully. Jan 16 20:39:39 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-104f2e1c0ef7a6831b012ebc45f8a373194e8abb9561cfa020b5fe88b4a369fa-merged.mount: Deactivated successfully. Jan 16 20:39:40 localhost.localdomain bootkube.sh[3228]: Rendering Kubernetes Controller Manager core manifests... Jan 16 20:39:43 localhost.localdomain kubelet.sh[2579]: I0116 20:39:43.725757 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 20:39:45 localhost.localdomain kubelet.sh[2579]: I0116 20:39:45.209177 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:39:45 localhost.localdomain kubelet.sh[2579]: I0116 20:39:45.212009 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:39:45 localhost.localdomain kubelet.sh[2579]: I0116 20:39:45.212146 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:39:45 localhost.localdomain kubelet.sh[2579]: I0116 20:39:45.212175 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:39:48 localhost.localdomain approve-csr.sh[5746]: E0116 20:39:48.898687 5746 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:39:48 localhost.localdomain approve-csr.sh[5746]: E0116 20:39:48.901242 5746 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:39:48 localhost.localdomain approve-csr.sh[5746]: E0116 20:39:48.902899 5746 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:39:48 localhost.localdomain approve-csr.sh[5746]: E0116 20:39:48.905389 5746 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:39:48 localhost.localdomain approve-csr.sh[5746]: E0116 20:39:48.907034 5746 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:39:48 localhost.localdomain approve-csr.sh[5746]: The connection to the server localhost:6443 was refused - did you specify the right host or port? Jan 16 20:39:51 localhost.localdomain systemd[1]: run-runc-ded78450e7c3ec478fb657a95bf0493759cd2362d8920e8ae4b1c1418c7f2df1-runc.DR3nqI.mount: Deactivated successfully. Jan 16 20:39:52 localhost.localdomain systemd[1]: Started libcontainer container ded78450e7c3ec478fb657a95bf0493759cd2362d8920e8ae4b1c1418c7f2df1. 
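Each "Writing asset:" line is a rendered manifest landing under /assets; the operation amounts to ensure-directory plus write-file plus log. A minimal sketch of that helper (file modes and the /tmp path in the example are assumptions):

    // writeasset.go - sketch of the "Writing asset: <path>" step.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func writeAsset(path string, data []byte) error {
    	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
    		return err
    	}
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		return err
    	}
    	fmt.Println("Writing asset:", path)
    	return nil
    }

    func main() {
    	yaml := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: openshift-kube-apiserver\n")
    	if err := writeAsset("/tmp/assets/kube-apiserver-bootstrap/manifests/00_openshift-kube-apiserver-ns.yaml", yaml); err != nil {
    		fmt.Println(err)
    	}
    }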
Jan 16 20:39:55 localhost.localdomain kubelet.sh[2579]: I0116 20:39:55.253878 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:39:55 localhost.localdomain kubelet.sh[2579]: I0116 20:39:55.261597 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:39:55 localhost.localdomain kubelet.sh[2579]: I0116 20:39:55.261905 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:39:55 localhost.localdomain kubelet.sh[2579]: I0116 20:39:55.262429 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:40:05 localhost.localdomain kubelet.sh[2579]: I0116 20:40:05.314511 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:40:05 localhost.localdomain kubelet.sh[2579]: I0116 20:40:05.321220 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:40:05 localhost.localdomain kubelet.sh[2579]: I0116 20:40:05.321441 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:40:05 localhost.localdomain kubelet.sh[2579]: I0116 20:40:05.321477 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:40:06 localhost.localdomain sudo[5798]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps Jan 16 20:40:06 localhost.localdomain sudo[5798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:40:06 localhost.localdomain sudo[5798]: pam_unix(sudo:session): session closed for user root Jan 16 20:40:08 localhost.localdomain sudo[5809]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps Jan 16 20:40:08 localhost.localdomain bootkube.sh[5723]: Writing asset: /assets/kube-controller-manager-bootstrap/bootstrap-manifests/kube-controller-manager-pod.yaml Jan 16 20:40:08 localhost.localdomain sudo[5809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:40:08 localhost.localdomain bootkube.sh[5723]: Writing asset: /assets/kube-controller-manager-bootstrap/manifests/00_namespace-security-allocation-controller-clusterrolebinding.yaml Jan 16 20:40:08 localhost.localdomain bootkube.sh[5723]: Writing asset: /assets/kube-controller-manager-bootstrap/manifests/00_podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml Jan 16 20:40:08 localhost.localdomain bootkube.sh[5723]: Writing asset: /assets/kube-controller-manager-bootstrap/manifests/00_podsecurity-admission-label-syncer-controller-clusterrole.yaml Jan 16 20:40:08 localhost.localdomain bootkube.sh[5723]: Writing asset: /assets/kube-controller-manager-bootstrap/manifests/secret-initial-kube-controller-manager-service-account-private-key.yaml Jan 16 20:40:08 localhost.localdomain systemd[1]: libpod-ded78450e7c3ec478fb657a95bf0493759cd2362d8920e8ae4b1c1418c7f2df1.scope: Deactivated successfully. Jan 16 20:40:08 localhost.localdomain systemd[1]: libpod-ded78450e7c3ec478fb657a95bf0493759cd2362d8920e8ae4b1c1418c7f2df1.scope: Consumed 16.093s CPU time. 
Jan 16 20:40:08 localhost.localdomain bootkube.sh[5723]: Writing asset: /assets/kube-controller-manager-bootstrap/manifests/secret-csr-signer-signer.yaml Jan 16 20:40:08 localhost.localdomain bootkube.sh[5723]: Writing asset: /assets/kube-controller-manager-bootstrap/manifests/0000_00_namespace-openshift-infra.yaml Jan 16 20:40:08 localhost.localdomain bootkube.sh[5723]: Writing asset: /assets/kube-controller-manager-bootstrap/manifests/00_namespace-security-allocation-controller-clusterrole.yaml Jan 16 20:40:08 localhost.localdomain bootkube.sh[5723]: Writing asset: /assets/kube-controller-manager-bootstrap/manifests/00_openshift-kube-controller-manager-ns.yaml Jan 16 20:40:08 localhost.localdomain bootkube.sh[5723]: Writing asset: /assets/kube-controller-manager-bootstrap/manifests/00_openshift-kube-controller-manager-operator-ns.yaml Jan 16 20:40:08 localhost.localdomain bootkube.sh[5723]: Writing asset: /assets/kube-controller-manager-bootstrap/manifests/00_podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml Jan 16 20:40:08 localhost.localdomain bootkube.sh[5723]: Writing asset: /assets/kube-controller-manager-bootstrap/manifests/00_podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml Jan 16 20:40:08 localhost.localdomain sudo[5809]: pam_unix(sudo:session): session closed for user root Jan 16 20:40:08 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ded78450e7c3ec478fb657a95bf0493759cd2362d8920e8ae4b1c1418c7f2df1-userdata-shm.mount: Deactivated successfully. Jan 16 20:40:08 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-6e2153e40aa3871ea2ad127882fa8011e934c0a3343d468bd320a56e1654e288-merged.mount: Deactivated successfully. Jan 16 20:40:08 localhost.localdomain bootkube.sh[3228]: Rendering Kubernetes Scheduler core manifests... Jan 16 20:40:09 localhost.localdomain approve-csr.sh[5856]: E0116 20:40:09.268577 5856 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:40:09 localhost.localdomain approve-csr.sh[5856]: E0116 20:40:09.270207 5856 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:40:09 localhost.localdomain approve-csr.sh[5856]: E0116 20:40:09.271560 5856 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:40:09 localhost.localdomain approve-csr.sh[5856]: E0116 20:40:09.272396 5856 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:40:09 localhost.localdomain approve-csr.sh[5856]: E0116 20:40:09.273488 5856 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:40:09 localhost.localdomain approve-csr.sh[5856]: The connection to the server localhost:6443 was refused - did you specify the right host or port? 
Jan 16 20:40:15 localhost.localdomain kubelet.sh[2579]: I0116 20:40:15.359684 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:40:15 localhost.localdomain kubelet.sh[2579]: I0116 20:40:15.365134 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:40:15 localhost.localdomain kubelet.sh[2579]: I0116 20:40:15.365252 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:40:15 localhost.localdomain kubelet.sh[2579]: I0116 20:40:15.365283 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:40:19 localhost.localdomain sudo[5899]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps Jan 16 20:40:19 localhost.localdomain sudo[5899]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:40:21 localhost.localdomain sudo[5899]: pam_unix(sudo:session): session closed for user root Jan 16 20:40:21 localhost.localdomain systemd[1]: Started libcontainer container a651885ab21b007c77c827692b33521ed84c2e10ddbddcb5ae2de19b83ce0a4a. Jan 16 20:40:21 localhost.localdomain systemd[1]: run-runc-a651885ab21b007c77c827692b33521ed84c2e10ddbddcb5ae2de19b83ce0a4a-runc.1VQOE1.mount: Deactivated successfully. Jan 16 20:40:22 localhost.localdomain bootkube.sh[5866]: Writing asset: /assets/kube-scheduler-bootstrap/bootstrap-manifests/kube-scheduler-pod.yaml Jan 16 20:40:22 localhost.localdomain bootkube.sh[5866]: Writing asset: /assets/kube-scheduler-bootstrap/manifests/00_openshift-kube-scheduler-ns.yaml Jan 16 20:40:22 localhost.localdomain systemd[1]: libpod-a651885ab21b007c77c827692b33521ed84c2e10ddbddcb5ae2de19b83ce0a4a.scope: Deactivated successfully. Jan 16 20:40:22 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a651885ab21b007c77c827692b33521ed84c2e10ddbddcb5ae2de19b83ce0a4a-userdata-shm.mount: Deactivated successfully. Jan 16 20:40:22 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-619885989f16f74bc8d511d5350aa204aec3096d309a9d7c401f9409199df9c9-merged.mount: Deactivated successfully. Jan 16 20:40:22 localhost.localdomain bootkube.sh[3228]: Rendering Ingress Operator core manifests... 
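Each "Rendering ... core manifests..." phase corresponds to bootkube.sh launching the matching operator image in a short-lived container (the libcontainer scopes starting and stopping around each phase) with the assets directory bind-mounted. A hypothetical os/exec sketch of that pattern; the image name, mount path, and render flags are placeholders, not the real operator arguments:

    // renderphase.go - sketch of a bootkube-style "podman run ... render" phase.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func renderPhase(name, image string, args ...string) error {
    	fmt.Printf("Rendering %s core manifests...\n", name)
    	cmd := exec.Command("podman",
    		append([]string{"run", "--rm", "--volume", "/opt/openshift:/assets:z", image}, args...)...)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	// Hypothetical invocation; the real scheduler render flags differ.
    	_ = renderPhase("Kubernetes Scheduler",
    		"quay.io/example/cluster-kube-scheduler-operator:latest",
    		"render", "--asset-output-dir=/assets/kube-scheduler-bootstrap")
    }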
Jan 16 20:40:25 localhost.localdomain kubelet.sh[2579]: I0116 20:40:25.420766 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:40:25 localhost.localdomain kubelet.sh[2579]: I0116 20:40:25.440564 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:40:25 localhost.localdomain kubelet.sh[2579]: I0116 20:40:25.440817 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:40:25 localhost.localdomain kubelet.sh[2579]: I0116 20:40:25.441178 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:40:29 localhost.localdomain approve-csr.sh[5999]: E0116 20:40:29.750488 5999 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:40:29 localhost.localdomain approve-csr.sh[5999]: E0116 20:40:29.751445 5999 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:40:29 localhost.localdomain approve-csr.sh[5999]: E0116 20:40:29.755565 5999 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:40:29 localhost.localdomain approve-csr.sh[5999]: E0116 20:40:29.762824 5999 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:40:29 localhost.localdomain approve-csr.sh[5999]: E0116 20:40:29.765066 5999 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:40:29 localhost.localdomain approve-csr.sh[5999]: The connection to the server localhost:6443 was refused - did you specify the right host or port? 
Jan 16 20:40:31 localhost.localdomain kubelet.sh[2579]: I0116 20:40:31.466758 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:40:31 localhost.localdomain kubelet.sh[2579]: I0116 20:40:31.470665 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:40:31 localhost.localdomain kubelet.sh[2579]: I0116 20:40:31.470757 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:40:31 localhost.localdomain kubelet.sh[2579]: I0116 20:40:31.470797 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:40:35 localhost.localdomain kubelet.sh[2579]: I0116 20:40:35.533688 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:40:35 localhost.localdomain kubelet.sh[2579]: I0116 20:40:35.538319 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:40:35 localhost.localdomain kubelet.sh[2579]: I0116 20:40:35.539037 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:40:35 localhost.localdomain kubelet.sh[2579]: I0116 20:40:35.539334 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:40:37 localhost.localdomain systemd[1]: Started libcontainer container 3742499bd51c2ecc670215dcc773b8e822fcc012532b222a51bacf258c33d1dd. Jan 16 20:40:38 localhost.localdomain bootkube.sh[5977]: wrote /assets/ingress-operator-manifests/cluster-ingress-00-custom-resource-definition.yaml Jan 16 20:40:38 localhost.localdomain bootkube.sh[5977]: wrote /assets/ingress-operator-manifests/cluster-ingress-00-namespace.yaml Jan 16 20:40:38 localhost.localdomain systemd[1]: libpod-3742499bd51c2ecc670215dcc773b8e822fcc012532b222a51bacf258c33d1dd.scope: Deactivated successfully. Jan 16 20:40:38 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3742499bd51c2ecc670215dcc773b8e822fcc012532b222a51bacf258c33d1dd-userdata-shm.mount: Deactivated successfully. Jan 16 20:40:38 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-270314005e8e698422e5f06c5a46c278b0694bcb9c6cb1027b68b9daa9a1c0e1-merged.mount: Deactivated successfully. Jan 16 20:40:38 localhost.localdomain bootkube.sh[3228]: Rendering Node Tuning core manifests... 
Jan 16 20:40:43 localhost.localdomain kubelet.sh[2579]: I0116 20:40:43.727208 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 20:40:45 localhost.localdomain kubelet.sh[2579]: I0116 20:40:45.634796 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:40:45 localhost.localdomain kubelet.sh[2579]: I0116 20:40:45.638229 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:40:45 localhost.localdomain kubelet.sh[2579]: I0116 20:40:45.638280 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:40:45 localhost.localdomain kubelet.sh[2579]: I0116 20:40:45.638310 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:40:50 localhost.localdomain approve-csr.sh[6116]: E0116 20:40:50.328836 6116 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:40:50 localhost.localdomain approve-csr.sh[6116]: E0116 20:40:50.337348 6116 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:40:50 localhost.localdomain approve-csr.sh[6116]: E0116 20:40:50.344991 6116 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:40:50 localhost.localdomain approve-csr.sh[6116]: E0116 20:40:50.349666 6116 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:40:50 localhost.localdomain approve-csr.sh[6116]: E0116 20:40:50.357376 6116 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:40:50 localhost.localdomain approve-csr.sh[6116]: The connection to the server localhost:6443 was refused - did you specify the right host or port? 
Jan 16 20:40:55 localhost.localdomain sudo[6128]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps Jan 16 20:40:55 localhost.localdomain sudo[6128]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:41:02 localhost.localdomain podman[4930]: Copying config sha256:57f293e35cf1d35f2ae2661d083e93eb1f506ec1e1fdd7997d0895fecec590b7 Jan 16 20:41:02 localhost.localdomain podman[4930]: Writing manifest to image destination Jan 16 20:41:02 localhost.localdomain podman[4930]: Storing signatures Jan 16 20:41:11 localhost.localdomain approve-csr.sh[6148]: E0116 20:41:11.223732 6148 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:41:11 localhost.localdomain approve-csr.sh[6148]: E0116 20:41:11.225077 6148 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:41:11 localhost.localdomain approve-csr.sh[6148]: E0116 20:41:11.225843 6148 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:41:11 localhost.localdomain approve-csr.sh[6148]: E0116 20:41:11.229212 6148 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:41:11 localhost.localdomain approve-csr.sh[6148]: E0116 20:41:11.230820 6148 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:41:11 localhost.localdomain approve-csr.sh[6148]: The connection to the server localhost:6443 was refused - did you specify the right host or port? Jan 16 20:41:12 localhost.localdomain kubelet.sh[2579]: I0116 20:41:12.362180 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:41:12 localhost.localdomain kubelet.sh[2579]: I0116 20:41:12.369166 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:41:12 localhost.localdomain kubelet.sh[2579]: I0116 20:41:12.369519 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:41:12 localhost.localdomain kubelet.sh[2579]: I0116 20:41:12.369583 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:41:14 localhost.localdomain sudo[6128]: pam_unix(sudo:session): session closed for user root Jan 16 20:41:15 localhost.localdomain systemd[1]: run-runc-98878eeb12717564cceb9b83ee35e10649b49c990f28eae80f44cfc25bd9baf5-runc.wi2U8z.mount: Deactivated successfully. Jan 16 20:41:15 localhost.localdomain systemd[1]: Started libcontainer container 98878eeb12717564cceb9b83ee35e10649b49c990f28eae80f44cfc25bd9baf5. 
Jan 16 20:41:15 localhost.localdomain NetworkManager[1706]: [1705437675.2106] manager: (vethc48d06d9): new Veth device (/org/freedesktop/NetworkManager/Devices/6)
Jan 16 20:41:15 localhost.localdomain kernel: cni-podman0: port 1(vethc48d06d9) entered blocking state
Jan 16 20:41:15 localhost.localdomain kernel: cni-podman0: port 1(vethc48d06d9) entered disabled state
Jan 16 20:41:15 localhost.localdomain kernel: device vethc48d06d9 entered promiscuous mode
Jan 16 20:41:15 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jan 16 20:41:15 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethc48d06d9: link becomes ready
Jan 16 20:41:15 localhost.localdomain kernel: cni-podman0: port 1(vethc48d06d9) entered blocking state
Jan 16 20:41:15 localhost.localdomain kernel: cni-podman0: port 1(vethc48d06d9) entered forwarding state
Jan 16 20:41:15 localhost.localdomain NetworkManager[1706]: [1705437675.2906] device (vethc48d06d9): carrier: link connected
Jan 16 20:41:15 localhost.localdomain NetworkManager[1706]: [1705437675.2932] device (cni-podman0): carrier: link connected
Jan 16 20:41:15 localhost.localdomain bootkube.sh[6083]: I0116 20:41:15.654230 1 render.go:73] Rendering files into: /assets/node-tuning-bootstrap
Jan 16 20:41:15 localhost.localdomain systemd[1]: Started libcontainer container 988af1d00c91e68249fa1e084f9c3c3864fd11d4efc36c05189450723ef244ba.
Jan 16 20:41:15 localhost.localdomain bootkube.sh[6083]: I0116 20:41:15.861363 1 render.go:133] skipping "/assets/manifests/99_feature-gate.yaml" [1] manifest because of unhandled *v1.FeatureGate
Jan 16 20:41:15 localhost.localdomain bootkube.sh[6083]: I0116 20:41:15.886201 1 render.go:133] skipping "/assets/manifests/cluster-dns-02-config.yml" [1] manifest because of unhandled *v1.DNS
Jan 16 20:41:15 localhost.localdomain bootkube.sh[6083]: I0116 20:41:15.953515 1 render.go:133] skipping "/assets/manifests/cluster-ingress-02-config.yml" [1] manifest because of unhandled *v1.Ingress
Jan 16 20:41:15 localhost.localdomain bootkube.sh[6083]: I0116 20:41:15.961122 1 render.go:133] skipping "/assets/manifests/cluster-network-02-config.yml" [1] manifest because of unhandled *v1.Network
Jan 16 20:41:15 localhost.localdomain bootkube.sh[6083]: I0116 20:41:15.962289 1 render.go:133] skipping "/assets/manifests/cluster-proxy-01-config.yaml" [1] manifest because of unhandled *v1.Proxy
Jan 16 20:41:15 localhost.localdomain bootkube.sh[6083]: I0116 20:41:15.964182 1 render.go:133] skipping "/assets/manifests/cluster-scheduler-02-config.yml" [1] manifest because of unhandled *v1.Scheduler
Jan 16 20:41:15 localhost.localdomain bootkube.sh[6083]: I0116 20:41:15.969662 1 render.go:133] skipping "/assets/manifests/cvo-overrides.yaml" [1] manifest because of unhandled *v1.ClusterVersion
Jan 16 20:41:16 localhost.localdomain bootkube.sh[6083]: W0116 20:41:16.009800 1 render.go:139] zero performance profiles were found
Jan 16 20:41:16 localhost.localdomain systemd[1]: libpod-98878eeb12717564cceb9b83ee35e10649b49c990f28eae80f44cfc25bd9baf5.scope: Deactivated successfully.
Jan 16 20:41:16 localhost.localdomain podman[6287]: extracting PXE files...
Jan 16 20:41:16 localhost.localdomain podman[6291]: /shared/html/images/coreos-x86_64-initrd.img
Jan 16 20:41:16 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-98878eeb12717564cceb9b83ee35e10649b49c990f28eae80f44cfc25bd9baf5-userdata-shm.mount: Deactivated successfully.
Jan 16 20:41:16 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-ab86528527ff5b66b8abf97e60b4d1b4181ae874b63aa9245cf148f0d6731321-merged.mount: Deactivated successfully.
Jan 16 20:41:16 localhost.localdomain podman[6291]: /shared/html/images/coreos-x86_64-rootfs.img
Jan 16 20:41:16 localhost.localdomain bootkube.sh[3228]: Rendering MCO manifests...
Jan 16 20:41:17 localhost.localdomain systemd[1]: Started libcontainer container 053dc3c679c83260c2aefd83abe0bf26a547ca7b7d968b98c4fe3eb0bb07864e.
Jan 16 20:41:19 localhost.localdomain podman[6291]: /shared/html/images/coreos-x86_64-vmlinuz
Jan 16 20:41:19 localhost.localdomain systemd[1]: libpod-053dc3c679c83260c2aefd83abe0bf26a547ca7b7d968b98c4fe3eb0bb07864e.scope: Deactivated successfully.
Jan 16 20:41:19 localhost.localdomain podman[6370]: Extracting ISO file
Jan 16 20:41:19 localhost.localdomain podman[6370]: Adding kernel argument ip=dhcp
Jan 16 20:41:22 localhost.localdomain kubelet.sh[2579]: I0116 20:41:22.429204 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:41:22 localhost.localdomain kubelet.sh[2579]: I0116 20:41:22.441694 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:41:22 localhost.localdomain kubelet.sh[2579]: I0116 20:41:22.442371 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:41:22 localhost.localdomain kubelet.sh[2579]: I0116 20:41:22.442605 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:41:23 localhost.localdomain sudo[6384]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps
Jan 16 20:41:23 localhost.localdomain sudo[6384]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000)
Jan 16 20:41:26 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-053dc3c679c83260c2aefd83abe0bf26a547ca7b7d968b98c4fe3eb0bb07864e-userdata-shm.mount: Deactivated successfully.
Jan 16 20:41:26 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-57d6501ea5bccf4cd54e39f22ce5ce3c776124c7e3a4b00a6528d8f4eaa4f9b5-merged.mount: Deactivated successfully.
Jan 16 20:41:26 localhost.localdomain systemd[1948]: Created slice User Background Tasks Slice.
Jan 16 20:41:26 localhost.localdomain systemd[1948]: Starting Cleanup of User's Temporary Files and Directories...
Jan 16 20:41:29 localhost.localdomain systemd[1]: libpod-988af1d00c91e68249fa1e084f9c3c3864fd11d4efc36c05189450723ef244ba.scope: Deactivated successfully.
Jan 16 20:41:29 localhost.localdomain systemd[1]: libpod-988af1d00c91e68249fa1e084f9c3c3864fd11d4efc36c05189450723ef244ba.scope: Consumed 6.282s CPU time.
Jan 16 20:41:30 localhost.localdomain sudo[6384]: pam_unix(sudo:session): session closed for user root
Jan 16 20:41:30 localhost.localdomain kernel: cni-podman0: port 1(vethc48d06d9) entered disabled state
Jan 16 20:41:30 localhost.localdomain kernel: device vethc48d06d9 left promiscuous mode
Jan 16 20:41:30 localhost.localdomain kernel: cni-podman0: port 1(vethc48d06d9) entered disabled state
Jan 16 20:41:31 localhost.localdomain systemd[1]: run-netns-netns\x2d96dd7fca\x2dad98\x2d38ee\x2d3195\x2d39be16c6de31.mount: Deactivated successfully.
Jan 16 20:41:32 localhost.localdomain kubelet.sh[2579]: I0116 20:41:32.564372 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:41:32 localhost.localdomain kubelet.sh[2579]: I0116 20:41:32.577356 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:41:32 localhost.localdomain kubelet.sh[2579]: I0116 20:41:32.577653 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:41:32 localhost.localdomain kubelet.sh[2579]: I0116 20:41:32.577732 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:41:34 localhost.localdomain approve-csr.sh[6494]: E0116 20:41:34.136747 6494 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:41:34 localhost.localdomain approve-csr.sh[6494]: E0116 20:41:34.144392 6494 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:41:34 localhost.localdomain approve-csr.sh[6494]: E0116 20:41:34.149130 6494 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:41:34 localhost.localdomain approve-csr.sh[6494]: E0116 20:41:34.160283 6494 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:41:34 localhost.localdomain approve-csr.sh[6494]: E0116 20:41:34.164576 6494 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:41:34 localhost.localdomain approve-csr.sh[6494]: The connection to the server localhost:6443 was refused - did you specify the right host or port?
Jan 16 20:41:34 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-988af1d00c91e68249fa1e084f9c3c3864fd11d4efc36c05189450723ef244ba-userdata-shm.mount: Deactivated successfully.
Jan 16 20:41:34 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-4126416248b3ae7408f6d9a271bd884206318f1b2f8c6f4c1fe8d53b1a3b30aa-merged.mount: Deactivated successfully.
Jan 16 20:41:34 localhost.localdomain systemd[1948]: Finished Cleanup of User's Temporary Files and Directories.
Jan 16 20:41:34 localhost.localdomain systemd[1]: extract-machine-os.service: start operation timed out. Terminating.
Jan 16 20:41:36 localhost.localdomain systemd[1]: extract-machine-os.service: Failed with result 'timeout'.
Jan 16 20:41:36 localhost.localdomain systemd[1]: Failed to start Extract Machine OS Images.
Jan 16 20:41:36 localhost.localdomain systemd[1]: Dependency failed for Customized Machine OS Image Server.
Jan 16 20:41:36 localhost.localdomain systemd[1]: Dependency failed for Ironic baremetal deployment service.
Jan 16 20:41:36 localhost.localdomain systemd[1]: Dependency failed for Ironic Inspector.
Jan 16 20:41:36 localhost.localdomain systemd[1]: ironic-inspector.service: Job ironic-inspector.service/start failed with result 'dependency'.
Jan 16 20:41:36 localhost.localdomain systemd[1]: ironic.service: Job ironic.service/start failed with result 'dependency'.
Jan 16 20:41:36 localhost.localdomain systemd[1]: image-customization.service: Job image-customization.service/start failed with result 'dependency'.
Jan 16 20:41:36 localhost.localdomain systemd[1]: extract-machine-os.service: Consumed 2min 22.199s CPU time.
Jan 16 20:41:36 localhost.localdomain systemd[1]: Starting HTTP Server for Ironic...
Jan 16 20:41:36 localhost.localdomain systemd[1]: Starting Ironic ramdisk logger...
Jan 16 20:41:36 localhost.localdomain systemd[1]: Starting Update master BareMetalHosts with introspection data...
Jan 16 20:41:36 localhost.localdomain sudo[6524]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps
Jan 16 20:41:36 localhost.localdomain sudo[6524]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000)
Jan 16 20:41:37 localhost.localdomain sudo[6524]: pam_unix(sudo:session): session closed for user root
Jan 16 20:41:37 localhost.localdomain podman[6536]:
Jan 16 20:41:37 localhost.localdomain NetworkManager[1706]: [1705437697.7141] manager: (vethf636b070): new Veth device (/org/freedesktop/NetworkManager/Devices/7)
Jan 16 20:41:37 localhost.localdomain kernel: cni-podman0: port 1(vethf636b070) entered blocking state
Jan 16 20:41:37 localhost.localdomain kernel: cni-podman0: port 1(vethf636b070) entered disabled state
Jan 16 20:41:37 localhost.localdomain kernel: device vethf636b070 entered promiscuous mode
Jan 16 20:41:37 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jan 16 20:41:37 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethf636b070: link becomes ready
Jan 16 20:41:37 localhost.localdomain kernel: cni-podman0: port 1(vethf636b070) entered blocking state
Jan 16 20:41:37 localhost.localdomain kernel: cni-podman0: port 1(vethf636b070) entered forwarding state
Jan 16 20:41:37 localhost.localdomain NetworkManager[1706]: [1705437697.7436] device (vethf636b070): carrier: link connected
Jan 16 20:41:37 localhost.localdomain NetworkManager[1706]: [1705437697.7446] device (cni-podman0): carrier: link connected
Jan 16 20:41:37 localhost.localdomain systemd[1]: Started HTTP Server for Ironic.
Jan 16 20:41:37 localhost.localdomain ironic-httpd[6536]: 67123bdb7e6358fbf8248eee34d6256a5b9c414a586b5475d6127e8fbd7d7d3b
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ IRONIC_IP=
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ PROVISIONING_INTERFACE=ens4
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ PROVISIONING_IP=
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ PROVISIONING_MACS=
Jan 16 20:41:37 localhost.localdomain httpd[6595]: +++ get_provisioning_interface
Jan 16 20:41:37 localhost.localdomain httpd[6595]: +++ [[ -n ens4 ]]
Jan 16 20:41:37 localhost.localdomain httpd[6595]: +++ echo ens4
Jan 16 20:41:37 localhost.localdomain httpd[6595]: +++ return
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ PROVISIONING_INTERFACE=ens4
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ export PROVISIONING_INTERFACE
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ export LISTEN_ALL_INTERFACES=true
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ LISTEN_ALL_INTERFACES=true
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ export IRONIC_PRIVATE_PORT=6388
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ IRONIC_PRIVATE_PORT=6388
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ export IRONIC_INSPECTOR_PRIVATE_PORT=5049
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ IRONIC_INSPECTOR_PRIVATE_PORT=5049
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ export IRONIC_ACCESS_PORT=6385
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ IRONIC_ACCESS_PORT=6385
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ export IRONIC_LISTEN_PORT=6385
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ IRONIC_LISTEN_PORT=6385
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ export IRONIC_INSPECTOR_ACCESS_PORT=5050
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ IRONIC_INSPECTOR_ACCESS_PORT=5050
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ export IRONIC_INSPECTOR_LISTEN_PORT=5050
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ IRONIC_INSPECTOR_LISTEN_PORT=5050
Jan 16 20:41:37 localhost.localdomain httpd[6595]: + export HTTP_PORT=6180
Jan 16 20:41:37 localhost.localdomain httpd[6595]: + HTTP_PORT=6180
Jan 16 20:41:37 localhost.localdomain httpd[6595]: + export VMEDIA_TLS_PORT=8083
Jan 16 20:41:37 localhost.localdomain httpd[6595]: + VMEDIA_TLS_PORT=8083
Jan 16 20:41:37 localhost.localdomain httpd[6595]: + INSPECTOR_ORIG_HTTPD_CONFIG=/etc/httpd/conf.d/inspector-apache.conf.j2
Jan 16 20:41:37 localhost.localdomain httpd[6595]: + INSPECTOR_RESULT_HTTPD_CONFIG=/etc/httpd/conf.d/ironic-inspector.conf
Jan 16 20:41:37 localhost.localdomain httpd[6595]: Waiting for ens4 interface to be configured
Jan 16 20:41:37 localhost.localdomain httpd[6595]: + export IRONIC_REVERSE_PROXY_SETUP=false
Jan 16 20:41:37 localhost.localdomain httpd[6595]: + IRONIC_REVERSE_PROXY_SETUP=false
Jan 16 20:41:37 localhost.localdomain httpd[6595]: + export INSPECTOR_REVERSE_PROXY_SETUP=false
Jan 16 20:41:37 localhost.localdomain httpd[6595]: + INSPECTOR_REVERSE_PROXY_SETUP=false
Jan 16 20:41:37 localhost.localdomain httpd[6595]: + IRONIC_FAST_TRACK=true
Jan 16 20:41:37 localhost.localdomain httpd[6595]: + wait_for_interface_or_ip
Jan 16 20:41:37 localhost.localdomain httpd[6595]: + [[ -n '' ]]
Jan 16 20:41:37 localhost.localdomain httpd[6595]: + [[ -n '' ]]
Jan 16 20:41:37 localhost.localdomain httpd[6595]: + echo 'Waiting for ens4 interface to be configured'
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ awk '{print $3}'
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ sed -e 's%/.*%%'
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ ip -br add show scope global up dev ens4
Jan 16 20:41:37 localhost.localdomain httpd[6595]: ++ head -n 1
Jan 16 20:41:37 localhost.localdomain httpd[6595]: + IRONIC_IP=172.22.0.2
Jan 16 20:41:37 localhost.localdomain httpd[6595]: + export IRONIC_IP
Jan 16 20:41:37 localhost.localdomain httpd[6595]: + sleep 1
Jan 16 20:41:38 localhost.localdomain master-bmh-update.sh[6581]: E0116 20:41:38.042300 6581 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:41:38 localhost.localdomain master-bmh-update.sh[6581]: E0116 20:41:38.043648 6581 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:41:38 localhost.localdomain master-bmh-update.sh[6581]: E0116 20:41:38.044699 6581 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:41:38 localhost.localdomain master-bmh-update.sh[6581]: E0116 20:41:38.047506 6581 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:41:38 localhost.localdomain master-bmh-update.sh[6581]: E0116 20:41:38.049683 6581 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:41:38 localhost.localdomain master-bmh-update.sh[6581]: The connection to the server localhost:6443 was refused - did you specify the right host or port?
Jan 16 20:41:38 localhost.localdomain master-bmh-update.sh[6528]: Waiting for BareMetalHosts to appear...
Jan 16 20:41:38 localhost.localdomain systemd[1]: Started Ironic ramdisk logger.
Jan 16 20:41:38 localhost.localdomain httpd[6595]: + [[ -n 172.22.0.2 ]]
Jan 16 20:41:38 localhost.localdomain httpd[6595]: + [[ 172.22.0.2 =~ .*:.* ]]
Jan 16 20:41:38 localhost.localdomain httpd[6595]: + export IPV=4
Jan 16 20:41:38 localhost.localdomain httpd[6595]: + IPV=4
Jan 16 20:41:38 localhost.localdomain httpd[6595]: + export IRONIC_URL_HOST=172.22.0.2
Jan 16 20:41:38 localhost.localdomain httpd[6595]: + IRONIC_URL_HOST=172.22.0.2
Jan 16 20:41:38 localhost.localdomain httpd[6595]: + mkdir -p /shared/html
Jan 16 20:41:38 localhost.localdomain httpd[6595]: + chmod 0777 /shared/html
Jan 16 20:41:38 localhost.localdomain httpd[6595]: + IRONIC_BASE_URL=http://172.22.0.2
Jan 16 20:41:38 localhost.localdomain httpd[6595]: + INSPECTOR_EXTRA_ARGS=' ipa-inspection-callback-url=http://172.22.0.2:5050/v1/continue'
Jan 16 20:41:38 localhost.localdomain httpd[6595]: + [[ true == \t\r\u\e ]]
Jan 16 20:41:38 localhost.localdomain httpd[6595]: + INSPECTOR_EXTRA_ARGS+=' ipa-api-url=http://172.22.0.2:6385'
Jan 16 20:41:38 localhost.localdomain httpd[6595]: + export INSPECTOR_EXTRA_ARGS
Jan 16 20:41:38 localhost.localdomain httpd[6595]: + . /bin/coreos-ipa-common.sh
Jan 16 20:41:38 localhost.localdomain httpd[6595]: ++ ROOTFS_FILE=/shared/html/images/ironic-python-agent.rootfs
Jan 16 20:41:38 localhost.localdomain httpd[6595]: ++ IGNITION_FILE=/shared/html/ironic-python-agent.ign
Jan 16 20:41:38 localhost.localdomain httpd[6595]: ++ ISO_FILE=/shared/html/images/ironic-python-agent.iso
Jan 16 20:41:38 localhost.localdomain httpd[6595]: ++ use_coreos_ipa
Jan 16 20:41:38 localhost.localdomain httpd[6595]: ++ [[ -f /shared/html/images/ironic-python-agent.rootfs ]]
Jan 16 20:41:38 localhost.localdomain httpd[6595]: ++ return 0
Jan 16 20:41:38 localhost.localdomain httpd[6595]: +++ coreos_kernel_params
Jan 16 20:41:38 localhost.localdomain httpd[6595]: +++ echo -n coreos.live.rootfs_url=http://172.22.0.2:6180/images/ironic-python-agent.rootfs
Jan 16 20:41:38 localhost.localdomain httpd[6595]: +++ [[ -f /shared/html/ironic-python-agent.ign ]]
Jan 16 20:41:38 localhost.localdomain httpd[6595]: +++ echo ' ignition.firstboot ignition.platform.id=metal'
Jan 16 20:41:38 localhost.localdomain httpd[6595]: ++ IRONIC_KERNEL_PARAMS=' coreos.live.rootfs_url=http://172.22.0.2:6180/images/ironic-python-agent.rootfs ignition.firstboot ignition.platform.id=metal'
Jan 16 20:41:38 localhost.localdomain httpd[6595]: ++ export IRONIC_KERNEL_PARAMS
Jan 16 20:41:38 localhost.localdomain httpd[6595]: + render_j2_config /tmp/inspector.ipxe.j2 /shared/html/inspector.ipxe
Jan 16 20:41:38 localhost.localdomain httpd[6595]: + python3 -c 'import os; import sys; import jinja2; sys.stdout.write(jinja2.Template(sys.stdin.read()).render(env=os.environ))'
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + cp /tmp/uefi_esp.img /shared/html/uefi_esp.img
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + [[ false == \t\r\u\e ]]
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + export INSPECTOR_REVERSE_PROXY_SETUP=false
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + INSPECTOR_REVERSE_PROXY_SETUP=false
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + [[ false == \t\r\u\e ]]
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + export IRONIC_REVERSE_PROXY_SETUP=false
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + IRONIC_REVERSE_PROXY_SETUP=false
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + export IRONIC_HTPASSWD=
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + IRONIC_HTPASSWD=
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + export INSPECTOR_HTPASSWD=
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + INSPECTOR_HTPASSWD=
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + [[ -n '' ]]
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + [[ -n '' ]]
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + [[ true == \t\r\u\e ]]
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + sed -i 's/^Listen .*$/Listen [::]:6180/' /etc/httpd/conf/httpd.conf
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + sed -i -e 's|\(^[[:space:]]*\)\(DocumentRoot\)\(.*\)|\1\2 "/shared/html"|' -e 's|||' -e 's|||' /etc/httpd/conf/httpd.conf
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + sed -i -e 's%^ \+CustomLog.*% CustomLog /dev/stderr combined%g' /etc/httpd/conf/httpd.conf
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + sed -i -e 's%^ErrorLog.*%ErrorLog /dev/stderr%g' /etc/httpd/conf/httpd.conf
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + cat
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + [[ false == \t\r\u\e ]]
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + [[ false == \t\r\u\e ]]
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + [[ false == \t\r\u\e ]]
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + [[ false == \t\r\u\e ]]
Jan 16 20:41:39 localhost.localdomain httpd[6595]: + exec /usr/sbin/httpd -DFOREGROUND
Jan 16 20:41:39 localhost.localdomain httpd[6595]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using localhost.localdomain. Set the 'ServerName' directive globally to suppress this message
Jan 16 20:41:39 localhost.localdomain httpd[6595]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using localhost.localdomain. Set the 'ServerName' directive globally to suppress this message
Jan 16 20:41:39 localhost.localdomain httpd[6595]: [Tue Jan 16 20:41:39.348884 2024] [ssl:warn] [pid 1:tid 1] AH01873: Init: Session Cache is not configured [hint: SSLSessionCache]
Jan 16 20:41:39 localhost.localdomain httpd[6595]: [Tue Jan 16 20:41:39.352197 2024] [mpm_event:notice] [pid 1:tid 1] AH00489: Apache/2.4.53 (Red Hat Enterprise Linux) mod_wsgi/4.7.1 Python/3.9 OpenSSL/3.0.7 configured -- resuming normal operations
Jan 16 20:41:39 localhost.localdomain httpd[6595]: [Tue Jan 16 20:41:39.352295 2024] [core:notice] [pid 1:tid 1] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
Jan 16 20:41:39 localhost.localdomain sudo[6697]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps
Jan 16 20:41:39 localhost.localdomain sudo[6697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000)
Jan 16 20:41:39 localhost.localdomain sudo[6697]: pam_unix(sudo:session): session closed for user root
Jan 16 20:41:42 localhost.localdomain kubelet.sh[2579]: I0116 20:41:42.654342 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:41:42 localhost.localdomain kubelet.sh[2579]: I0116 20:41:42.681590 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:41:42 localhost.localdomain kubelet.sh[2579]: I0116 20:41:42.684712 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:41:42 localhost.localdomain kubelet.sh[2579]: I0116 20:41:42.684860 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:41:43 localhost.localdomain kubelet.sh[2579]: I0116 20:41:43.728673 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 20:41:43 localhost.localdomain sudo[6792]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps
Jan 16 20:41:43 localhost.localdomain sudo[6792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000)
Jan 16 20:41:43 localhost.localdomain sudo[6792]: pam_unix(sudo:session): session closed for user root
Jan 16 20:41:45 localhost.localdomain kubelet.sh[2579]: I0116 20:41:45.470464 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:41:45 localhost.localdomain kubelet.sh[2579]: I0116 20:41:45.483381 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:41:45 localhost.localdomain kubelet.sh[2579]: I0116 20:41:45.483676 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:41:45 localhost.localdomain kubelet.sh[2579]: I0116 20:41:45.484068 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:41:46 localhost.localdomain systemd[1]: extract-machine-os.service: Scheduled restart job, restart counter is at 1.
Jan 16 20:41:46 localhost.localdomain systemd[1]: Stopped Extract Machine OS Images.
Jan 16 20:41:46 localhost.localdomain systemd[1]: extract-machine-os.service: Consumed 2min 22.199s CPU time.
Jan 16 20:41:46 localhost.localdomain systemd[1]: Starting Extract Machine OS Images...
Jan 16 20:41:47 localhost.localdomain NetworkManager[1706]: [1705437707.1102] manager: (veth18c300f2): new Veth device (/org/freedesktop/NetworkManager/Devices/8)
Jan 16 20:41:47 localhost.localdomain kernel: cni-podman0: port 2(veth18c300f2) entered blocking state
Jan 16 20:41:47 localhost.localdomain kernel: cni-podman0: port 2(veth18c300f2) entered disabled state
Jan 16 20:41:47 localhost.localdomain kernel: device veth18c300f2 entered promiscuous mode
Jan 16 20:41:47 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jan 16 20:41:47 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth18c300f2: link becomes ready
Jan 16 20:41:47 localhost.localdomain kernel: cni-podman0: port 2(veth18c300f2) entered blocking state
Jan 16 20:41:47 localhost.localdomain kernel: cni-podman0: port 2(veth18c300f2) entered forwarding state
Jan 16 20:41:47 localhost.localdomain NetworkManager[1706]: [1705437707.1527] device (veth18c300f2): carrier: link connected
Jan 16 20:41:47 localhost.localdomain systemd[1]: run-runc-2c8e369d7d138aba7a0f87fca9e67dd94d9565e336196ac4741db7a5c6c840a1-runc.1j3Jz8.mount: Deactivated successfully.
Jan 16 20:41:47 localhost.localdomain systemd[1]: Started libcontainer container 2c8e369d7d138aba7a0f87fca9e67dd94d9565e336196ac4741db7a5c6c840a1.
Jan 16 20:41:47 localhost.localdomain podman[6918]: /shared/html/images//coreos-x86_64-[vmlinuz|initrd.img|rootfs.img] are already up to date
Jan 16 20:41:47 localhost.localdomain podman[6926]: /shared/html/images//coreos-x86_64.iso is already up to date
Jan 16 20:41:47 localhost.localdomain systemd[1]: libpod-2c8e369d7d138aba7a0f87fca9e67dd94d9565e336196ac4741db7a5c6c840a1.scope: Deactivated successfully.
Jan 16 20:41:48 localhost.localdomain kernel: cni-podman0: port 2(veth18c300f2) entered disabled state
Jan 16 20:41:48 localhost.localdomain kernel: device veth18c300f2 left promiscuous mode
Jan 16 20:41:48 localhost.localdomain kernel: cni-podman0: port 2(veth18c300f2) entered disabled state
Jan 16 20:41:48 localhost.localdomain systemd[1]: run-netns-netns\x2dddd6e658\x2d6b38\x2d27c9\x2dcf8e\x2dce8e3ba4f1ce.mount: Deactivated successfully.
Jan 16 20:41:48 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2c8e369d7d138aba7a0f87fca9e67dd94d9565e336196ac4741db7a5c6c840a1-userdata-shm.mount: Deactivated successfully.
Jan 16 20:41:48 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-6f4375a79f86b9d4e3a7face68af9a9219d4e659601ecfc52c55fff586263c9a-merged.mount: Deactivated successfully.
Jan 16 20:41:48 localhost.localdomain systemd[1]: Finished Extract Machine OS Images.
Jan 16 20:41:52 localhost.localdomain kubelet.sh[2579]: I0116 20:41:52.750321 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:41:52 localhost.localdomain kubelet.sh[2579]: I0116 20:41:52.759075 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:41:52 localhost.localdomain kubelet.sh[2579]: I0116 20:41:52.759477 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:41:52 localhost.localdomain kubelet.sh[2579]: I0116 20:41:52.759549 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:41:54 localhost.localdomain approve-csr.sh[7017]: E0116 20:41:54.650057 7017 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:41:54 localhost.localdomain approve-csr.sh[7017]: E0116 20:41:54.654176 7017 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:41:54 localhost.localdomain approve-csr.sh[7017]: E0116 20:41:54.663219 7017 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:41:54 localhost.localdomain approve-csr.sh[7017]: E0116 20:41:54.666463 7017 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:41:54 localhost.localdomain approve-csr.sh[7017]: E0116 20:41:54.669488 7017 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:41:54 localhost.localdomain approve-csr.sh[7017]: The connection to the server localhost:6443 was refused - did you specify the right host or port?
Jan 16 20:41:56 localhost.localdomain systemd[1]: Started libcontainer container 305da686d90346bf98043cd90bcc631e1ffa68b6a7a56b295f27985d1ac85bfb.
Jan 16 20:41:57 localhost.localdomain bootkube.sh[6407]: I0116 20:41:57.135503 1 bootstrap.go:118] Version: v4.14.0-202312120433.p0.g7649b92.assembly.stream-dirty (7649b9274cde2fb50a61a579e3891c8ead2d79c5)
Jan 16 20:41:57 localhost.localdomain bootkube.sh[6407]: I0116 20:41:57.168389 1 bootstrap.go:189] manifests/machineconfigcontroller/controllerconfig.yaml
Jan 16 20:41:57 localhost.localdomain bootkube.sh[6407]: I0116 20:41:57.178568 1 bootstrap.go:189] manifests/master.machineconfigpool.yaml
Jan 16 20:41:57 localhost.localdomain bootkube.sh[6407]: I0116 20:41:57.179857 1 bootstrap.go:189] manifests/worker.machineconfigpool.yaml
Jan 16 20:41:57 localhost.localdomain bootkube.sh[6407]: I0116 20:41:57.181233 1 bootstrap.go:189] manifests/bootstrap-pod-v2.yaml
Jan 16 20:41:57 localhost.localdomain bootkube.sh[6407]: I0116 20:41:57.182619 1 bootstrap.go:189] manifests/machineconfigserver/csr-bootstrap-role-binding.yaml
Jan 16 20:41:57 localhost.localdomain bootkube.sh[6407]: I0116 20:41:57.183881 1 bootstrap.go:189] manifests/machineconfigserver/kube-apiserver-serving-ca-configmap.yaml
Jan 16 20:41:57 localhost.localdomain bootkube.sh[6407]: I0116 20:41:57.185366 1 bootstrap.go:189] manifests/on-prem/coredns.yaml
Jan 16 20:41:57 localhost.localdomain bootkube.sh[6407]: I0116 20:41:57.187685 1 bootstrap.go:189] manifests/on-prem/coredns-corefile.tmpl
Jan 16 20:41:57 localhost.localdomain bootkube.sh[6407]: I0116 20:41:57.205435 1 bootstrap.go:189] manifests/on-prem/keepalived.yaml
Jan 16 20:41:57 localhost.localdomain bootkube.sh[6407]: I0116 20:41:57.210591 1 bootstrap.go:189] manifests/on-prem/keepalived.conf.tmpl
Jan 16 20:41:57 localhost.localdomain systemd[1]: libpod-305da686d90346bf98043cd90bcc631e1ffa68b6a7a56b295f27985d1ac85bfb.scope: Deactivated successfully.
Jan 16 20:41:57 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-305da686d90346bf98043cd90bcc631e1ffa68b6a7a56b295f27985d1ac85bfb-userdata-shm.mount: Deactivated successfully.
Jan 16 20:41:57 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-d12605c7d68948cff50068ccaaec677e5c2b2ecb78123095c259600a3b6c7143-merged.mount: Deactivated successfully.
Jan 16 20:41:57 localhost.localdomain kubelet.sh[2579]: I0116 20:41:57.897326 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[default/bootstrap-machine-config-operator-localhost.localdomain] Jan 16 20:41:57 localhost.localdomain kubelet.sh[2579]: I0116 20:41:57.898196 2579 topology_manager.go:212] "Topology Admit Handler" podUID=543511857c8f22a7df82dd78b38d8f78 podNamespace="default" podName="bootstrap-machine-config-operator-localhost.localdomain" Jan 16 20:41:57 localhost.localdomain kubelet.sh[2579]: I0116 20:41:57.899203 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:41:57 localhost.localdomain kubelet.sh[2579]: I0116 20:41:57.912682 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:41:57 localhost.localdomain kubelet.sh[2579]: I0116 20:41:57.913872 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:41:57 localhost.localdomain kubelet.sh[2579]: I0116 20:41:57.914505 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:41:57 localhost.localdomain kubelet.sh[2579]: I0116 20:41:57.915671 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[openshift-kni-infra/coredns-localhost.localdomain] Jan 16 20:41:57 localhost.localdomain kubelet.sh[2579]: I0116 20:41:57.916354 2579 topology_manager.go:212] "Topology Admit Handler" podUID=8fbf03b752412e8c829ad5b819ca09f0 podNamespace="openshift-kni-infra" podName="coredns-localhost.localdomain" Jan 16 20:41:57 localhost.localdomain kubelet.sh[2579]: I0116 20:41:57.917083 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:41:57 localhost.localdomain kubelet.sh[2579]: I0116 20:41:57.923095 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:41:57 localhost.localdomain kubelet.sh[2579]: I0116 20:41:57.923284 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:41:57 localhost.localdomain kubelet.sh[2579]: I0116 20:41:57.923374 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:41:57 localhost.localdomain kubelet.sh[2579]: I0116 20:41:57.925520 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[openshift-kni-infra/keepalived-localhost.localdomain] Jan 16 20:41:57 localhost.localdomain kubelet.sh[2579]: I0116 20:41:57.926209 2579 topology_manager.go:212] "Topology Admit Handler" podUID=f3cb0bd9c64889e06acccc1066e67828 podNamespace="openshift-kni-infra" podName="keepalived-localhost.localdomain" Jan 16 20:41:57 localhost.localdomain kubelet.sh[2579]: I0116 20:41:57.926669 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:41:57 localhost.localdomain kubelet.sh[2579]: I0116 20:41:57.932323 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:41:57 localhost.localdomain kubelet.sh[2579]: I0116 20:41:57.933356 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:41:57 localhost.localdomain 
kubelet.sh[2579]: I0116 20:41:57.933425 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:41:57 localhost.localdomain systemd[1]: Created slice libcontainer container kubepods-burstable-pod543511857c8f22a7df82dd78b38d8f78.slice. Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.011432 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.023700 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.023914 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.024188 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:41:58 localhost.localdomain systemd[1]: Created slice libcontainer container kubepods-burstable-pod8fbf03b752412e8c829ad5b819ca09f0.slice. Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.041266 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fbf03b752412e8c829ad5b819ca09f0-kubeconfig\") pod \"coredns-localhost.localdomain\" (UID: \"8fbf03b752412e8c829ad5b819ca09f0\") " pod="openshift-kni-infra/coredns-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.041670 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f3cb0bd9c64889e06acccc1066e67828-kubeconfig\") pod \"keepalived-localhost.localdomain\" (UID: \"f3cb0bd9c64889e06acccc1066e67828\") " pod="openshift-kni-infra/keepalived-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.042154 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-kubeconfig\" (UniqueName: \"kubernetes.io/host-path/543511857c8f22a7df82dd78b38d8f78-bootstrap-kubeconfig\") pod \"bootstrap-machine-config-operator-localhost.localdomain\" (UID: \"543511857c8f22a7df82dd78b38d8f78\") " pod="default/bootstrap-machine-config-operator-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.042381 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-basedir\" (UniqueName: \"kubernetes.io/host-path/543511857c8f22a7df82dd78b38d8f78-server-basedir\") pod \"bootstrap-machine-config-operator-localhost.localdomain\" (UID: \"543511857c8f22a7df82dd78b38d8f78\") " pod="default/bootstrap-machine-config-operator-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.045200 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.050857 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.051203 2579 kubelet_node_status.go:696] 
"Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.051267 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.042486 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8fbf03b752412e8c829ad5b819ca09f0-manifests\") pod \"coredns-localhost.localdomain\" (UID: \"8fbf03b752412e8c829ad5b819ca09f0\") " pod="openshift-kni-infra/coredns-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.055457 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"conf-dir\" (UniqueName: \"kubernetes.io/host-path/f3cb0bd9c64889e06acccc1066e67828-conf-dir\") pod \"keepalived-localhost.localdomain\" (UID: \"f3cb0bd9c64889e06acccc1066e67828\") " pod="openshift-kni-infra/keepalived-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.055874 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f3cb0bd9c64889e06acccc1066e67828-manifests\") pod \"keepalived-localhost.localdomain\" (UID: \"f3cb0bd9c64889e06acccc1066e67828\") " pod="openshift-kni-infra/keepalived-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.056231 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-certs\" (UniqueName: \"kubernetes.io/host-path/543511857c8f22a7df82dd78b38d8f78-server-certs\") pod \"bootstrap-machine-config-operator-localhost.localdomain\" (UID: \"543511857c8f22a7df82dd78b38d8f78\") " pod="default/bootstrap-machine-config-operator-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.056341 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-manifests\" (UniqueName: \"kubernetes.io/host-path/543511857c8f22a7df82dd78b38d8f78-bootstrap-manifests\") pod \"bootstrap-machine-config-operator-localhost.localdomain\" (UID: \"543511857c8f22a7df82dd78b38d8f78\") " pod="default/bootstrap-machine-config-operator-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.058061 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8fbf03b752412e8c829ad5b819ca09f0-resource-dir\") pod \"coredns-localhost.localdomain\" (UID: \"8fbf03b752412e8c829ad5b819ca09f0\") " pod="openshift-kni-infra/coredns-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.058863 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"conf-dir\" (UniqueName: \"kubernetes.io/empty-dir/8fbf03b752412e8c829ad5b819ca09f0-conf-dir\") pod \"coredns-localhost.localdomain\" (UID: \"8fbf03b752412e8c829ad5b819ca09f0\") " pod="openshift-kni-infra/coredns-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.060530 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f3cb0bd9c64889e06acccc1066e67828-resource-dir\") pod \"keepalived-localhost.localdomain\" (UID: \"f3cb0bd9c64889e06acccc1066e67828\") " pod="openshift-kni-infra/keepalived-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.063460 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-dir\" (UniqueName: \"kubernetes.io/empty-dir/f3cb0bd9c64889e06acccc1066e67828-run-dir\") pod \"keepalived-localhost.localdomain\" (UID: \"f3cb0bd9c64889e06acccc1066e67828\") " pod="openshift-kni-infra/keepalived-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain systemd[1]: Created slice libcontainer container kubepods-burstable-podf3cb0bd9c64889e06acccc1066e67828.slice. Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.092287 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.100516 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.100712 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.100878 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.167470 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fbf03b752412e8c829ad5b819ca09f0-kubeconfig\") pod \"coredns-localhost.localdomain\" (UID: \"8fbf03b752412e8c829ad5b819ca09f0\") " pod="openshift-kni-infra/coredns-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.167855 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f3cb0bd9c64889e06acccc1066e67828-kubeconfig\") pod \"keepalived-localhost.localdomain\" (UID: \"f3cb0bd9c64889e06acccc1066e67828\") " pod="openshift-kni-infra/keepalived-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.168151 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bootstrap-kubeconfig\" (UniqueName: \"kubernetes.io/host-path/543511857c8f22a7df82dd78b38d8f78-bootstrap-kubeconfig\") pod \"bootstrap-machine-config-operator-localhost.localdomain\" (UID: \"543511857c8f22a7df82dd78b38d8f78\") " pod="default/bootstrap-machine-config-operator-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.168254 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"server-basedir\" (UniqueName: \"kubernetes.io/host-path/543511857c8f22a7df82dd78b38d8f78-server-basedir\") pod \"bootstrap-machine-config-operator-localhost.localdomain\" (UID: \"543511857c8f22a7df82dd78b38d8f78\") " pod="default/bootstrap-machine-config-operator-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.168340 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/8fbf03b752412e8c829ad5b819ca09f0-manifests\") pod \"coredns-localhost.localdomain\" (UID: \"8fbf03b752412e8c829ad5b819ca09f0\") " pod="openshift-kni-infra/coredns-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.168433 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"conf-dir\" (UniqueName: \"kubernetes.io/host-path/f3cb0bd9c64889e06acccc1066e67828-conf-dir\") pod \"keepalived-localhost.localdomain\" (UID: \"f3cb0bd9c64889e06acccc1066e67828\") " pod="openshift-kni-infra/keepalived-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.168516 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f3cb0bd9c64889e06acccc1066e67828-manifests\") pod \"keepalived-localhost.localdomain\" (UID: \"f3cb0bd9c64889e06acccc1066e67828\") " pod="openshift-kni-infra/keepalived-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.168600 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-dir\" (UniqueName: \"kubernetes.io/empty-dir/f3cb0bd9c64889e06acccc1066e67828-run-dir\") pod \"keepalived-localhost.localdomain\" (UID: \"f3cb0bd9c64889e06acccc1066e67828\") " pod="openshift-kni-infra/keepalived-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.168690 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"server-certs\" (UniqueName: \"kubernetes.io/host-path/543511857c8f22a7df82dd78b38d8f78-server-certs\") pod \"bootstrap-machine-config-operator-localhost.localdomain\" (UID: \"543511857c8f22a7df82dd78b38d8f78\") " pod="default/bootstrap-machine-config-operator-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.169067 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bootstrap-manifests\" (UniqueName: \"kubernetes.io/host-path/543511857c8f22a7df82dd78b38d8f78-bootstrap-manifests\") pod \"bootstrap-machine-config-operator-localhost.localdomain\" (UID: \"543511857c8f22a7df82dd78b38d8f78\") " pod="default/bootstrap-machine-config-operator-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.169179 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8fbf03b752412e8c829ad5b819ca09f0-resource-dir\") pod \"coredns-localhost.localdomain\" (UID: \"8fbf03b752412e8c829ad5b819ca09f0\") " pod="openshift-kni-infra/coredns-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.169264 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"conf-dir\" (UniqueName: \"kubernetes.io/empty-dir/8fbf03b752412e8c829ad5b819ca09f0-conf-dir\") pod \"coredns-localhost.localdomain\" (UID: \"8fbf03b752412e8c829ad5b819ca09f0\") " pod="openshift-kni-infra/coredns-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.169347 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f3cb0bd9c64889e06acccc1066e67828-resource-dir\") pod \"keepalived-localhost.localdomain\" (UID: \"f3cb0bd9c64889e06acccc1066e67828\") " pod="openshift-kni-infra/keepalived-localhost.localdomain" Jan 16 20:41:58 
localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.169617 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f3cb0bd9c64889e06acccc1066e67828-resource-dir\") pod \"keepalived-localhost.localdomain\" (UID: \"f3cb0bd9c64889e06acccc1066e67828\") " pod="openshift-kni-infra/keepalived-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.170096 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fbf03b752412e8c829ad5b819ca09f0-kubeconfig\") pod \"coredns-localhost.localdomain\" (UID: \"8fbf03b752412e8c829ad5b819ca09f0\") " pod="openshift-kni-infra/coredns-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.170225 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f3cb0bd9c64889e06acccc1066e67828-kubeconfig\") pod \"keepalived-localhost.localdomain\" (UID: \"f3cb0bd9c64889e06acccc1066e67828\") " pod="openshift-kni-infra/keepalived-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.170345 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"bootstrap-kubeconfig\" (UniqueName: \"kubernetes.io/host-path/543511857c8f22a7df82dd78b38d8f78-bootstrap-kubeconfig\") pod \"bootstrap-machine-config-operator-localhost.localdomain\" (UID: \"543511857c8f22a7df82dd78b38d8f78\") " pod="default/bootstrap-machine-config-operator-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.170453 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"server-basedir\" (UniqueName: \"kubernetes.io/host-path/543511857c8f22a7df82dd78b38d8f78-server-basedir\") pod \"bootstrap-machine-config-operator-localhost.localdomain\" (UID: \"543511857c8f22a7df82dd78b38d8f78\") " pod="default/bootstrap-machine-config-operator-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.170553 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8fbf03b752412e8c829ad5b819ca09f0-manifests\") pod \"coredns-localhost.localdomain\" (UID: \"8fbf03b752412e8c829ad5b819ca09f0\") " pod="openshift-kni-infra/coredns-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.170680 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"conf-dir\" (UniqueName: \"kubernetes.io/host-path/f3cb0bd9c64889e06acccc1066e67828-conf-dir\") pod \"keepalived-localhost.localdomain\" (UID: \"f3cb0bd9c64889e06acccc1066e67828\") " pod="openshift-kni-infra/keepalived-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.171089 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f3cb0bd9c64889e06acccc1066e67828-manifests\") pod \"keepalived-localhost.localdomain\" (UID: \"f3cb0bd9c64889e06acccc1066e67828\") " pod="openshift-kni-infra/keepalived-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.173235 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"run-dir\" (UniqueName: \"kubernetes.io/empty-dir/f3cb0bd9c64889e06acccc1066e67828-run-dir\") pod \"keepalived-localhost.localdomain\" (UID: 
\"f3cb0bd9c64889e06acccc1066e67828\") " pod="openshift-kni-infra/keepalived-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.173504 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"server-certs\" (UniqueName: \"kubernetes.io/host-path/543511857c8f22a7df82dd78b38d8f78-server-certs\") pod \"bootstrap-machine-config-operator-localhost.localdomain\" (UID: \"543511857c8f22a7df82dd78b38d8f78\") " pod="default/bootstrap-machine-config-operator-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.173850 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"bootstrap-manifests\" (UniqueName: \"kubernetes.io/host-path/543511857c8f22a7df82dd78b38d8f78-bootstrap-manifests\") pod \"bootstrap-machine-config-operator-localhost.localdomain\" (UID: \"543511857c8f22a7df82dd78b38d8f78\") " pod="default/bootstrap-machine-config-operator-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.174175 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8fbf03b752412e8c829ad5b819ca09f0-resource-dir\") pod \"coredns-localhost.localdomain\" (UID: \"8fbf03b752412e8c829ad5b819ca09f0\") " pod="openshift-kni-infra/coredns-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.175242 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"conf-dir\" (UniqueName: \"kubernetes.io/empty-dir/8fbf03b752412e8c829ad5b819ca09f0-conf-dir\") pod \"coredns-localhost.localdomain\" (UID: \"8fbf03b752412e8c829ad5b819ca09f0\") " pod="openshift-kni-infra/coredns-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain bootkube.sh[3228]: Check if API and API-Int URLs are resolvable during bootstrap Jan 16 20:41:58 localhost.localdomain bootkube.sh[3228]: Checking if api.lab.ocpipi.lan of type API_URL is resolvable Jan 16 20:41:58 localhost.localdomain bootkube.sh[3228]: Starting stage resolve-api-url Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.328579 2579 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="default/bootstrap-machine-config-operator-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.335547186Z" level=info msg="Running pod sandbox: default/bootstrap-machine-config-operator-localhost.localdomain/POD" id=792a7fdc-4d63-452e-afda-a3217fc2b819 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.336690696Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.353364 2579 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kni-infra/coredns-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.358273806Z" level=info msg="Running pod sandbox: openshift-kni-infra/coredns-localhost.localdomain/POD" id=efd68851-50c4-45ed-949a-e32a7ab88e97 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.358914110Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: I0116 20:41:58.403156 2579 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kni-infra/keepalived-localhost.localdomain" Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.409313095Z" level=info msg="Running pod sandbox: openshift-kni-infra/keepalived-localhost.localdomain/POD" id=99eff721-1a0a-4689-84d6-05e536362e8e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.420306437Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: W0116 20:41:58.520825 2579 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fbf03b752412e8c829ad5b819ca09f0.slice/crio-917f47085fabfd7e7639a927d28cd2921f4061ce59b8f79a55177f4a5f77ad15 WatchSource:0}: Error finding container 917f47085fabfd7e7639a927d28cd2921f4061ce59b8f79a55177f4a5f77ad15: Status 404 returned error can't find the container with id 917f47085fabfd7e7639a927d28cd2921f4061ce59b8f79a55177f4a5f77ad15 Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.529032551Z" level=info msg="Ran pod sandbox 917f47085fabfd7e7639a927d28cd2921f4061ce59b8f79a55177f4a5f77ad15 with infra container: openshift-kni-infra/coredns-localhost.localdomain/POD" id=efd68851-50c4-45ed-949a-e32a7ab88e97 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.538297169Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d" id=2a232d16-fa41-40bf-8f31-0969aad8c106 name=/runtime.v1.ImageService/ImageStatus Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.540332006Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d not found" id=2a232d16-fa41-40bf-8f31-0969aad8c106 name=/runtime.v1.ImageService/ImageStatus Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.545327390Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d" id=9fc50305-8fd7-47e9-b33e-3be1fb80738d name=/runtime.v1.ImageService/PullImage Jan 16 20:41:58 localhost.localdomain kubelet.sh[2579]: W0116 20:41:58.555604 2579 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod543511857c8f22a7df82dd78b38d8f78.slice/crio-1d6d777cd9b78b08f87e3b2ebe4134ee01be22b298e1777527638ead992c85de WatchSource:0}: Error finding container 1d6d777cd9b78b08f87e3b2ebe4134ee01be22b298e1777527638ead992c85de: Status 404 returned error can't find the container with id 
1d6d777cd9b78b08f87e3b2ebe4134ee01be22b298e1777527638ead992c85de Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.565586284Z" level=info msg="Ran pod sandbox 7ebdc370e2c6148b8fcf32f4fc2cc95081bf61cd8d6252b3c4013c6ed54602ca with infra container: openshift-kni-infra/keepalived-localhost.localdomain/POD" id=99eff721-1a0a-4689-84d6-05e536362e8e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.565619536Z" level=info msg="Ran pod sandbox 1d6d777cd9b78b08f87e3b2ebe4134ee01be22b298e1777527638ead992c85de with infra container: default/bootstrap-machine-config-operator-localhost.localdomain/POD" id=792a7fdc-4d63-452e-afda-a3217fc2b819 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.566196551Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d\"" Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.582714858Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:594f2e4ef75bf8bfd342670ddd1d50bd97888671f13b8b566af8c568285de689" id=2c56e6ee-bf65-4ca9-9476-c8a1905b4014 name=/runtime.v1.ImageService/ImageStatus Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.584208932Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53888e519bbc048ac321065625e6bb450215810dcd1249fded533a47e028f066" id=ddc92ed0-42f5-49ef-9b68-3e529ab8798a name=/runtime.v1.ImageService/ImageStatus Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.586224879Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:594f2e4ef75bf8bfd342670ddd1d50bd97888671f13b8b566af8c568285de689 not found" id=2c56e6ee-bf65-4ca9-9476-c8a1905b4014 name=/runtime.v1.ImageService/ImageStatus Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.592559003Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:71b61b393d8680f24798223813f23e033e2457d713d8771de9a5fe2a4da80e12,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53888e519bbc048ac321065625e6bb450215810dcd1249fded533a47e028f066],Size_:848145829,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=ddc92ed0-42f5-49ef-9b68-3e529ab8798a name=/runtime.v1.ImageService/ImageStatus Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.594865830Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:594f2e4ef75bf8bfd342670ddd1d50bd97888671f13b8b566af8c568285de689" id=8e024fc9-33a0-41f8-b9f1-5af7dea42c41 name=/runtime.v1.ImageService/PullImage Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.596158612Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53888e519bbc048ac321065625e6bb450215810dcd1249fded533a47e028f066" id=62f134f8-b918-4ac7-8332-a2be9505106d name=/runtime.v1.ImageService/ImageStatus Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.605903108Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:71b61b393d8680f24798223813f23e033e2457d713d8771de9a5fe2a4da80e12,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53888e519bbc048ac321065625e6bb450215810dcd1249fded533a47e028f066],Size_:848145829,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=62f134f8-b918-4ac7-8332-a2be9505106d name=/runtime.v1.ImageService/ImageStatus Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.608495462Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:594f2e4ef75bf8bfd342670ddd1d50bd97888671f13b8b566af8c568285de689\"" Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.611468179Z" level=info msg="Creating container: default/bootstrap-machine-config-operator-localhost.localdomain/machine-config-controller" id=3745d017-a4f9-4ef8-b8a6-b9ee14ee1511 name=/runtime.v1.RuntimeService/CreateContainer Jan 16 20:41:58 localhost.localdomain crio[2304]: time="2024-01-16 20:41:58.612305602Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 20:41:58 localhost.localdomain master-bmh-update.sh[7095]: E0116 20:41:58.822609 7095 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:41:58 localhost.localdomain master-bmh-update.sh[7095]: E0116 20:41:58.826147 7095 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:41:58 localhost.localdomain master-bmh-update.sh[7095]: E0116 20:41:58.829868 7095 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:41:58 localhost.localdomain master-bmh-update.sh[7095]: E0116 20:41:58.831578 7095 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:41:58 localhost.localdomain master-bmh-update.sh[7095]: E0116 20:41:58.833850 7095 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:41:58 localhost.localdomain master-bmh-update.sh[7095]: The connection to the server localhost:6443 was refused - did you specify the right host or port? Jan 16 20:41:58 localhost.localdomain master-bmh-update.sh[6528]: Waiting for BareMetalHosts to appear... Jan 16 20:41:59 localhost.localdomain systemd[1]: Started crio-conmon-2bb2928f4780a4ce6a586ea12fd06c56a16faf0ecccd797c47cbdfad37183c9c.scope. Jan 16 20:41:59 localhost.localdomain systemd[1]: Started libcontainer container 2bb2928f4780a4ce6a586ea12fd06c56a16faf0ecccd797c47cbdfad37183c9c. 
Jan 16 20:41:59 localhost.localdomain kubelet.sh[2579]: I0116 20:41:59.389389 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="default/bootstrap-machine-config-operator-localhost.localdomain" event=&{ID:543511857c8f22a7df82dd78b38d8f78 Type:ContainerStarted Data:1d6d777cd9b78b08f87e3b2ebe4134ee01be22b298e1777527638ead992c85de} Jan 16 20:41:59 localhost.localdomain kubelet.sh[2579]: I0116 20:41:59.395281 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/keepalived-localhost.localdomain" event=&{ID:f3cb0bd9c64889e06acccc1066e67828 Type:ContainerStarted Data:7ebdc370e2c6148b8fcf32f4fc2cc95081bf61cd8d6252b3c4013c6ed54602ca} Jan 16 20:41:59 localhost.localdomain kubelet.sh[2579]: I0116 20:41:59.398644 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/coredns-localhost.localdomain" event=&{ID:8fbf03b752412e8c829ad5b819ca09f0 Type:ContainerStarted Data:917f47085fabfd7e7639a927d28cd2921f4061ce59b8f79a55177f4a5f77ad15} Jan 16 20:41:59 localhost.localdomain crio[2304]: time="2024-01-16 20:41:59.546414155Z" level=info msg="Created container 2bb2928f4780a4ce6a586ea12fd06c56a16faf0ecccd797c47cbdfad37183c9c: default/bootstrap-machine-config-operator-localhost.localdomain/machine-config-controller" id=3745d017-a4f9-4ef8-b8a6-b9ee14ee1511 name=/runtime.v1.RuntimeService/CreateContainer Jan 16 20:41:59 localhost.localdomain crio[2304]: time="2024-01-16 20:41:59.550518348Z" level=info msg="Starting container: 2bb2928f4780a4ce6a586ea12fd06c56a16faf0ecccd797c47cbdfad37183c9c" id=1c31a459-1f6c-45e0-8620-8d9a1afafa83 name=/runtime.v1.RuntimeService/StartContainer Jan 16 20:41:59 localhost.localdomain bootkube.sh[3228]: Successfully resolved API_URL api.lab.ocpipi.lan Jan 16 20:41:59 localhost.localdomain crio[2304]: time="2024-01-16 20:41:59.602883145Z" level=info msg="Started container" PID=7135 containerID=2bb2928f4780a4ce6a586ea12fd06c56a16faf0ecccd797c47cbdfad37183c9c description=default/bootstrap-machine-config-operator-localhost.localdomain/machine-config-controller id=1c31a459-1f6c-45e0-8620-8d9a1afafa83 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d6d777cd9b78b08f87e3b2ebe4134ee01be22b298e1777527638ead992c85de Jan 16 20:41:59 localhost.localdomain bootkube.sh[3228]: Checking if api-int.lab.ocpipi.lan of type API_INT_URL is resolvable Jan 16 20:41:59 localhost.localdomain bootkube.sh[3228]: Starting stage resolve-api-int-url Jan 16 20:42:00 localhost.localdomain kubelet.sh[2579]: I0116 20:42:00.413345 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="default/bootstrap-machine-config-operator-localhost.localdomain" event=&{ID:543511857c8f22a7df82dd78b38d8f78 Type:ContainerStarted Data:2bb2928f4780a4ce6a586ea12fd06c56a16faf0ecccd797c47cbdfad37183c9c} Jan 16 20:42:00 localhost.localdomain kubelet.sh[2579]: I0116 20:42:00.414675 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:42:00 localhost.localdomain kubelet.sh[2579]: I0116 20:42:00.420675 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:42:00 localhost.localdomain kubelet.sh[2579]: I0116 20:42:00.420893 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:42:00 localhost.localdomain kubelet.sh[2579]: I0116 20:42:00.421152 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" 
event="NodeHasSufficientPID" Jan 16 20:42:00 localhost.localdomain crio[2304]: time="2024-01-16 20:42:00.724492595Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:594f2e4ef75bf8bfd342670ddd1d50bd97888671f13b8b566af8c568285de689\"" Jan 16 20:42:00 localhost.localdomain crio[2304]: time="2024-01-16 20:42:00.747515709Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d\"" Jan 16 20:42:00 localhost.localdomain bootkube.sh[3228]: Unable to resolve API_INT_URL api-int.lab.ocpipi.lan Jan 16 20:42:01 localhost.localdomain bootkube.sh[3228]: Rendering CCO manifests... Jan 16 20:42:01 localhost.localdomain kubelet.sh[2579]: I0116 20:42:01.420266 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:42:01 localhost.localdomain kubelet.sh[2579]: I0116 20:42:01.426332 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:42:01 localhost.localdomain kubelet.sh[2579]: I0116 20:42:01.426522 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:42:01 localhost.localdomain kubelet.sh[2579]: I0116 20:42:01.426582 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:42:02 localhost.localdomain kubelet.sh[2579]: I0116 20:42:02.844571 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:42:02 localhost.localdomain kubelet.sh[2579]: I0116 20:42:02.850523 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:42:02 localhost.localdomain kubelet.sh[2579]: I0116 20:42:02.850836 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:42:02 localhost.localdomain kubelet.sh[2579]: I0116 20:42:02.850901 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:42:06 localhost.localdomain systemd[1]: crio-2bb2928f4780a4ce6a586ea12fd06c56a16faf0ecccd797c47cbdfad37183c9c.scope: Deactivated successfully. Jan 16 20:42:06 localhost.localdomain systemd[1]: crio-2bb2928f4780a4ce6a586ea12fd06c56a16faf0ecccd797c47cbdfad37183c9c.scope: Consumed 6.514s CPU time. Jan 16 20:42:06 localhost.localdomain systemd[1]: crio-conmon-2bb2928f4780a4ce6a586ea12fd06c56a16faf0ecccd797c47cbdfad37183c9c.scope: Deactivated successfully. 
Jan 16 20:42:06 localhost.localdomain kubelet.sh[2579]: I0116 20:42:06.503851 2579 generic.go:334] "Generic (PLEG): container finished" podID=543511857c8f22a7df82dd78b38d8f78 containerID="2bb2928f4780a4ce6a586ea12fd06c56a16faf0ecccd797c47cbdfad37183c9c" exitCode=0 Jan 16 20:42:06 localhost.localdomain kubelet.sh[2579]: I0116 20:42:06.504285 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="default/bootstrap-machine-config-operator-localhost.localdomain" event=&{ID:543511857c8f22a7df82dd78b38d8f78 Type:ContainerDied Data:2bb2928f4780a4ce6a586ea12fd06c56a16faf0ecccd797c47cbdfad37183c9c} Jan 16 20:42:06 localhost.localdomain kubelet.sh[2579]: I0116 20:42:06.505775 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:42:06 localhost.localdomain kubelet.sh[2579]: I0116 20:42:06.511370 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:42:06 localhost.localdomain kubelet.sh[2579]: I0116 20:42:06.511827 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:42:06 localhost.localdomain kubelet.sh[2579]: I0116 20:42:06.511910 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:42:06 localhost.localdomain crio[2304]: time="2024-01-16 20:42:06.514466457Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53888e519bbc048ac321065625e6bb450215810dcd1249fded533a47e028f066" id=6b1db12a-ad14-43e8-8928-85c2ce79edff name=/runtime.v1.ImageService/ImageStatus Jan 16 20:42:06 localhost.localdomain crio[2304]: time="2024-01-16 20:42:06.516690181Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:71b61b393d8680f24798223813f23e033e2457d713d8771de9a5fe2a4da80e12,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53888e519bbc048ac321065625e6bb450215810dcd1249fded533a47e028f066],Size_:848145829,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=6b1db12a-ad14-43e8-8928-85c2ce79edff name=/runtime.v1.ImageService/ImageStatus Jan 16 20:42:06 localhost.localdomain crio[2304]: time="2024-01-16 20:42:06.519090372Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53888e519bbc048ac321065625e6bb450215810dcd1249fded533a47e028f066" id=0a2de15e-1aa2-4f3f-bf60-fa625fb83868 name=/runtime.v1.ImageService/ImageStatus Jan 16 20:42:06 localhost.localdomain crio[2304]: time="2024-01-16 20:42:06.519481654Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:71b61b393d8680f24798223813f23e033e2457d713d8771de9a5fe2a4da80e12,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53888e519bbc048ac321065625e6bb450215810dcd1249fded533a47e028f066],Size_:848145829,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=0a2de15e-1aa2-4f3f-bf60-fa625fb83868 name=/runtime.v1.ImageService/ImageStatus Jan 16 20:42:06 localhost.localdomain crio[2304]: time="2024-01-16 20:42:06.523151815Z" level=info msg="Creating container: default/bootstrap-machine-config-operator-localhost.localdomain/machine-config-server" id=f6de9f3e-da7e-461f-869b-c5983999591e name=/runtime.v1.RuntimeService/CreateContainer Jan 16 20:42:06 localhost.localdomain 
crio[2304]: time="2024-01-16 20:42:06.523897381Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 20:42:07 localhost.localdomain systemd[1]: Started crio-conmon-8be1f1254288aa0972e76fce0712b009cce592d7de4a09c89e0c2538d1246d76.scope. Jan 16 20:42:07 localhost.localdomain systemd[1]: run-runc-8be1f1254288aa0972e76fce0712b009cce592d7de4a09c89e0c2538d1246d76-runc.TkG7cg.mount: Deactivated successfully. Jan 16 20:42:07 localhost.localdomain systemd[1]: Started libcontainer container 8be1f1254288aa0972e76fce0712b009cce592d7de4a09c89e0c2538d1246d76. Jan 16 20:42:07 localhost.localdomain crio[2304]: time="2024-01-16 20:42:07.773776022Z" level=info msg="Created container 8be1f1254288aa0972e76fce0712b009cce592d7de4a09c89e0c2538d1246d76: default/bootstrap-machine-config-operator-localhost.localdomain/machine-config-server" id=f6de9f3e-da7e-461f-869b-c5983999591e name=/runtime.v1.RuntimeService/CreateContainer Jan 16 20:42:07 localhost.localdomain crio[2304]: time="2024-01-16 20:42:07.778427415Z" level=info msg="Starting container: 8be1f1254288aa0972e76fce0712b009cce592d7de4a09c89e0c2538d1246d76" id=f3086450-784c-40be-994c-1501b7727edb name=/runtime.v1.RuntimeService/StartContainer Jan 16 20:42:07 localhost.localdomain crio[2304]: time="2024-01-16 20:42:07.853590184Z" level=info msg="Started container" PID=7252 containerID=8be1f1254288aa0972e76fce0712b009cce592d7de4a09c89e0c2538d1246d76 description=default/bootstrap-machine-config-operator-localhost.localdomain/machine-config-server id=f3086450-784c-40be-994c-1501b7727edb name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d6d777cd9b78b08f87e3b2ebe4134ee01be22b298e1777527638ead992c85de Jan 16 20:42:08 localhost.localdomain kubelet.sh[2579]: I0116 20:42:08.529421 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="default/bootstrap-machine-config-operator-localhost.localdomain" event=&{ID:543511857c8f22a7df82dd78b38d8f78 Type:ContainerStarted Data:8be1f1254288aa0972e76fce0712b009cce592d7de4a09c89e0c2538d1246d76} Jan 16 20:42:08 localhost.localdomain kubelet.sh[2579]: I0116 20:42:08.530316 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:42:08 localhost.localdomain kubelet.sh[2579]: I0116 20:42:08.533051 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:42:08 localhost.localdomain kubelet.sh[2579]: I0116 20:42:08.533221 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:42:08 localhost.localdomain kubelet.sh[2579]: I0116 20:42:08.533315 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:42:12 localhost.localdomain kubelet.sh[2579]: I0116 20:42:12.982617 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:42:12 localhost.localdomain kubelet.sh[2579]: I0116 20:42:12.991510 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:42:12 localhost.localdomain kubelet.sh[2579]: I0116 20:42:12.992209 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:42:12 localhost.localdomain kubelet.sh[2579]: I0116 20:42:12.992552 2579 kubelet_node_status.go:696] 
"Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:42:14 localhost.localdomain sudo[7294]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps Jan 16 20:42:14 localhost.localdomain sudo[7294]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:42:15 localhost.localdomain approve-csr.sh[7290]: E0116 20:42:15.436791 7290 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:42:15 localhost.localdomain approve-csr.sh[7290]: E0116 20:42:15.441760 7290 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:42:15 localhost.localdomain approve-csr.sh[7290]: E0116 20:42:15.444358 7290 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:42:15 localhost.localdomain approve-csr.sh[7290]: E0116 20:42:15.446103 7290 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:42:15 localhost.localdomain approve-csr.sh[7290]: E0116 20:42:15.448247 7290 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:42:15 localhost.localdomain approve-csr.sh[7290]: The connection to the server localhost:6443 was refused - did you specify the right host or port? Jan 16 20:42:15 localhost.localdomain sudo[7294]: pam_unix(sudo:session): session closed for user root Jan 16 20:42:19 localhost.localdomain master-bmh-update.sh[7324]: E0116 20:42:19.360536 7324 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:42:19 localhost.localdomain master-bmh-update.sh[7324]: E0116 20:42:19.363629 7324 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:42:19 localhost.localdomain master-bmh-update.sh[7324]: E0116 20:42:19.366462 7324 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:42:19 localhost.localdomain master-bmh-update.sh[7324]: E0116 20:42:19.367786 7324 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:42:19 localhost.localdomain master-bmh-update.sh[7324]: E0116 20:42:19.368744 7324 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused Jan 16 20:42:19 localhost.localdomain master-bmh-update.sh[7324]: The connection to the server localhost:6443 was refused - did you specify the right host or port? Jan 16 20:42:19 localhost.localdomain master-bmh-update.sh[6528]: Waiting for BareMetalHosts to appear... 
Jan 16 20:42:23 localhost.localdomain kubelet.sh[2579]: I0116 20:42:23.060703 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:23 localhost.localdomain kubelet.sh[2579]: I0116 20:42:23.075185 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:23 localhost.localdomain kubelet.sh[2579]: I0116 20:42:23.076535 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:23 localhost.localdomain kubelet.sh[2579]: I0116 20:42:23.076879 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:25 localhost.localdomain crio[2304]: time="2024-01-16 20:42:25.676568777Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:594f2e4ef75bf8bfd342670ddd1d50bd97888671f13b8b566af8c568285de689" id=8e024fc9-33a0-41f8-b9f1-5af7dea42c41 name=/runtime.v1.ImageService/PullImage
Jan 16 20:42:25 localhost.localdomain crio[2304]: time="2024-01-16 20:42:25.683892400Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:594f2e4ef75bf8bfd342670ddd1d50bd97888671f13b8b566af8c568285de689" id=2950f649-51c8-4f1d-af3d-670376369104 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:25 localhost.localdomain crio[2304]: time="2024-01-16 20:42:25.704476473Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:5cb5dd5856f0cbd66a3227a48d327384ad2ba615d2e9f2313428232427b8aeb7,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:594f2e4ef75bf8bfd342670ddd1d50bd97888671f13b8b566af8c568285de689],Size_:537687465,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=2950f649-51c8-4f1d-af3d-670376369104 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:25 localhost.localdomain crio[2304]: time="2024-01-16 20:42:25.711238747Z" level=info msg="Creating container: openshift-kni-infra/keepalived-localhost.localdomain/keepalived" id=70bed059-a19e-4b5d-bcd0-e950439240bb name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:42:25 localhost.localdomain crio[2304]: time="2024-01-16 20:42:25.712539513Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:42:25 localhost.localdomain systemd[1]: Started crio-conmon-d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8.scope.
Jan 16 20:42:26 localhost.localdomain systemd[1]: run-runc-491f69964c18c0740e6169588a511b33d7dcb7c08a52fdf186730c64a4ab1ce2-runc.4ZscNM.mount: Deactivated successfully.
Jan 16 20:42:26 localhost.localdomain systemd[1]: Started libcontainer container d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8.
Jan 16 20:42:26 localhost.localdomain systemd[1]: Started libcontainer container 491f69964c18c0740e6169588a511b33d7dcb7c08a52fdf186730c64a4ab1ce2.
Jan 16 20:42:26 localhost.localdomain crio[2304]: time="2024-01-16 20:42:26.278020031Z" level=info msg="Created container d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8: openshift-kni-infra/keepalived-localhost.localdomain/keepalived" id=70bed059-a19e-4b5d-bcd0-e950439240bb name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:42:26 localhost.localdomain crio[2304]: time="2024-01-16 20:42:26.281289905Z" level=info msg="Starting container: d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8" id=5024d0cd-c290-43c3-ba44-029b35f6f874 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 20:42:26 localhost.localdomain crio[2304]: time="2024-01-16 20:42:26.330367385Z" level=info msg="Started container" PID=7369 containerID=d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8 description=openshift-kni-infra/keepalived-localhost.localdomain/keepalived id=5024d0cd-c290-43c3-ba44-029b35f6f874 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ebdc370e2c6148b8fcf32f4fc2cc95081bf61cd8d6252b3c4013c6ed54602ca
Jan 16 20:42:26 localhost.localdomain crio[2304]: time="2024-01-16 20:42:26.395263241Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d" id=dfb22c60-fd95-46d4-8d37-6b8db40c2a83 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:26 localhost.localdomain crio[2304]: time="2024-01-16 20:42:26.396240471Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d not found" id=dfb22c60-fd95-46d4-8d37-6b8db40c2a83 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:26 localhost.localdomain crio[2304]: time="2024-01-16 20:42:26.406569352Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d" id=5ac28108-8384-4a9c-beba-abf544d9458e name=/runtime.v1.ImageService/PullImage
Jan 16 20:42:26 localhost.localdomain crio[2304]: time="2024-01-16 20:42:26.416799174Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d\""
Jan 16 20:42:26 localhost.localdomain kubelet.sh[2579]: I0116 20:42:26.651545 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/keepalived-localhost.localdomain" event=&{ID:f3cb0bd9c64889e06acccc1066e67828 Type:ContainerStarted Data:d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8}
Jan 16 20:42:26 localhost.localdomain bootkube.sh[7202]: time="2024-01-16T20:42:26Z" level=info msg="Rendering files to /assets/cco-bootstrap"
Jan 16 20:42:26 localhost.localdomain bootkube.sh[7202]: time="2024-01-16T20:42:26Z" level=info msg="Writing file: /assets/cco-bootstrap/manifests/cco-cloudcredential_v1_operator_config_custresdef.yaml"
Jan 16 20:42:26 localhost.localdomain bootkube.sh[7202]: time="2024-01-16T20:42:26Z" level=info msg="Writing file: /assets/cco-bootstrap/manifests/cco-cloudcredential_v1_credentialsrequest_crd.yaml"
Jan 16 20:42:26 localhost.localdomain bootkube.sh[7202]: time="2024-01-16T20:42:26Z" level=info msg="Writing file: /assets/cco-bootstrap/manifests/cco-namespace.yaml"
Jan 16 20:42:26 localhost.localdomain bootkube.sh[7202]: time="2024-01-16T20:42:26Z" level=info msg="Writing file: /assets/cco-bootstrap/manifests/cco-operator-config.yaml"
Jan 16 20:42:26 localhost.localdomain bootkube.sh[7202]: time="2024-01-16T20:42:26Z" level=info msg="Rendering static pod"
time="2024-01-16T20:42:26Z" level=info msg="Rendering static pod" Jan 16 20:42:26 localhost.localdomain bootkube.sh[7202]: time="2024-01-16T20:42:26Z" level=info msg="writing file: /assets/cco-bootstrap/bootstrap-manifests/cloud-credential-operator-pod.yaml" Jan 16 20:42:26 localhost.localdomain systemd[1]: libpod-491f69964c18c0740e6169588a511b33d7dcb7c08a52fdf186730c64a4ab1ce2.scope: Deactivated successfully. Jan 16 20:42:27 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-9e81131abf184b1bf6d33f5a17de61d5664ad1ea5ee65fc358b3f781962a79be-merged.mount: Deactivated successfully. Jan 16 20:42:27 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-491f69964c18c0740e6169588a511b33d7dcb7c08a52fdf186730c64a4ab1ce2-userdata-shm.mount: Deactivated successfully. Jan 16 20:42:27 localhost.localdomain systemd[1]: Started libcontainer container e7cab49aba9fb255d0c668e865c304c8216fa1e7472a624fe03edc386d34283c. Jan 16 20:42:28 localhost.localdomain crio[2304]: time="2024-01-16 20:42:28.480541524Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d\"" Jan 16 20:42:28 localhost.localdomain bootkube.sh[7445]: https://localhost:2379 is healthy: successfully committed proposal: took = 35.058669ms Jan 16 20:42:28 localhost.localdomain systemd[1]: libpod-e7cab49aba9fb255d0c668e865c304c8216fa1e7472a624fe03edc386d34283c.scope: Deactivated successfully. Jan 16 20:42:33 localhost.localdomain kubelet.sh[2579]: I0116 20:42:33.216515 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:42:33 localhost.localdomain kubelet.sh[2579]: I0116 20:42:33.233435 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:42:33 localhost.localdomain kubelet.sh[2579]: I0116 20:42:33.233849 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:42:33 localhost.localdomain kubelet.sh[2579]: I0116 20:42:33.234085 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:42:35 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e7cab49aba9fb255d0c668e865c304c8216fa1e7472a624fe03edc386d34283c-userdata-shm.mount: Deactivated successfully. Jan 16 20:42:35 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-ac4511b3dc5ab49d60b523333bd9986d9fd1b184fd6ccd655693b9cd4a9745f0-merged.mount: Deactivated successfully. 
Jan 16 20:42:35 localhost.localdomain crio[2304]: time="2024-01-16 20:42:35.916448812Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d" id=5ac28108-8384-4a9c-beba-abf544d9458e name=/runtime.v1.ImageService/PullImage Jan 16 20:42:35 localhost.localdomain crio[2304]: time="2024-01-16 20:42:35.922260521Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d" id=9bae8351-b8e4-45b4-8ec2-f9faacd3554c name=/runtime.v1.ImageService/ImageStatus Jan 16 20:42:35 localhost.localdomain crio[2304]: time="2024-01-16 20:42:35.929343505Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a86afd22a7cf3d4ab5bad64f333a5759eaa087500f4642d2edc18a59b1bdbdd9,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d],Size_:759621966,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=9bae8351-b8e4-45b4-8ec2-f9faacd3554c name=/runtime.v1.ImageService/ImageStatus Jan 16 20:42:35 localhost.localdomain crio[2304]: time="2024-01-16 20:42:35.937119318Z" level=info msg="Creating container: openshift-kni-infra/keepalived-localhost.localdomain/keepalived-monitor" id=47b4a50d-4def-4ffb-8877-8b25ef98387b name=/runtime.v1.RuntimeService/CreateContainer Jan 16 20:42:35 localhost.localdomain crio[2304]: time="2024-01-16 20:42:35.938470687Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 20:42:36 localhost.localdomain crio[2304]: time="2024-01-16 20:42:36.051098320Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d" id=9fc50305-8fd7-47e9-b33e-3be1fb80738d name=/runtime.v1.ImageService/PullImage Jan 16 20:42:36 localhost.localdomain crio[2304]: time="2024-01-16 20:42:36.055360806Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d" id=77647676-1bd0-4879-bce7-b37101a77a1c name=/runtime.v1.ImageService/ImageStatus Jan 16 20:42:36 localhost.localdomain crio[2304]: time="2024-01-16 20:42:36.077314927Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a86afd22a7cf3d4ab5bad64f333a5759eaa087500f4642d2edc18a59b1bdbdd9,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d],Size_:759621966,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=77647676-1bd0-4879-bce7-b37101a77a1c name=/runtime.v1.ImageService/ImageStatus Jan 16 20:42:36 localhost.localdomain crio[2304]: time="2024-01-16 20:42:36.081872748Z" level=info msg="Creating container: openshift-kni-infra/coredns-localhost.localdomain/render-config" id=42c8fa6f-8171-4ad3-9f30-9e351fa9b95b name=/runtime.v1.RuntimeService/CreateContainer Jan 16 20:42:36 localhost.localdomain crio[2304]: time="2024-01-16 20:42:36.082574606Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 20:42:36 localhost.localdomain bootkube.sh[3228]: Starting cluster-bootstrap... 
Jan 16 20:42:36 localhost.localdomain approve-csr.sh[7523]: E0116 20:42:36.369898 7523 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:42:36 localhost.localdomain approve-csr.sh[7523]: E0116 20:42:36.374523 7523 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:42:36 localhost.localdomain approve-csr.sh[7523]: E0116 20:42:36.376744 7523 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:42:36 localhost.localdomain approve-csr.sh[7523]: E0116 20:42:36.384882 7523 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:42:36 localhost.localdomain approve-csr.sh[7523]: E0116 20:42:36.397393 7523 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:42:36 localhost.localdomain approve-csr.sh[7523]: The connection to the server localhost:6443 was refused - did you specify the right host or port?
Jan 16 20:42:36 localhost.localdomain systemd[1]: Started crio-conmon-27f0a69b5a1170662ecdcb22b60df84ee82dc8b43f39e64495dfc15c1553e58b.scope.
Jan 16 20:42:36 localhost.localdomain systemd[1]: Started crio-conmon-15bd4a023d4b8b33230729b8f76441bd77247153171cfb31a44627de58b29f88.scope.
Jan 16 20:42:36 localhost.localdomain systemd[1]: Started libcontainer container 27f0a69b5a1170662ecdcb22b60df84ee82dc8b43f39e64495dfc15c1553e58b.
Jan 16 20:42:36 localhost.localdomain systemd[1]: Started libcontainer container 15bd4a023d4b8b33230729b8f76441bd77247153171cfb31a44627de58b29f88.
Jan 16 20:42:37 localhost.localdomain crio[2304]: time="2024-01-16 20:42:37.063896922Z" level=info msg="Created container 27f0a69b5a1170662ecdcb22b60df84ee82dc8b43f39e64495dfc15c1553e58b: openshift-kni-infra/keepalived-localhost.localdomain/keepalived-monitor" id=47b4a50d-4def-4ffb-8877-8b25ef98387b name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:42:37 localhost.localdomain crio[2304]: time="2024-01-16 20:42:37.066811517Z" level=info msg="Starting container: 27f0a69b5a1170662ecdcb22b60df84ee82dc8b43f39e64495dfc15c1553e58b" id=24624c91-45ae-48ae-99cd-7a47ee80a29b name=/runtime.v1.RuntimeService/StartContainer
Jan 16 20:42:37 localhost.localdomain crio[2304]: time="2024-01-16 20:42:37.087253211Z" level=info msg="Created container 15bd4a023d4b8b33230729b8f76441bd77247153171cfb31a44627de58b29f88: openshift-kni-infra/coredns-localhost.localdomain/render-config" id=42c8fa6f-8171-4ad3-9f30-9e351fa9b95b name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:42:37 localhost.localdomain crio[2304]: time="2024-01-16 20:42:37.089434695Z" level=info msg="Starting container: 15bd4a023d4b8b33230729b8f76441bd77247153171cfb31a44627de58b29f88" id=7a8c4b9c-6427-43b1-be56-59cb35abf8b0 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 20:42:37 localhost.localdomain crio[2304]: time="2024-01-16 20:42:37.099567980Z" level=info msg="Started container" PID=7603 containerID=27f0a69b5a1170662ecdcb22b60df84ee82dc8b43f39e64495dfc15c1553e58b description=openshift-kni-infra/keepalived-localhost.localdomain/keepalived-monitor id=24624c91-45ae-48ae-99cd-7a47ee80a29b name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ebdc370e2c6148b8fcf32f4fc2cc95081bf61cd8d6252b3c4013c6ed54602ca
Jan 16 20:42:37 localhost.localdomain crio[2304]: time="2024-01-16 20:42:37.146776327Z" level=info msg="Started container" PID=7610 containerID=15bd4a023d4b8b33230729b8f76441bd77247153171cfb31a44627de58b29f88 description=openshift-kni-infra/coredns-localhost.localdomain/render-config id=7a8c4b9c-6427-43b1-be56-59cb35abf8b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=917f47085fabfd7e7639a927d28cd2921f4061ce59b8f79a55177f4a5f77ad15
Jan 16 20:42:37 localhost.localdomain systemd[1]: crio-15bd4a023d4b8b33230729b8f76441bd77247153171cfb31a44627de58b29f88.scope: Deactivated successfully.
Jan 16 20:42:37 localhost.localdomain systemd[1]: crio-conmon-15bd4a023d4b8b33230729b8f76441bd77247153171cfb31a44627de58b29f88.scope: Deactivated successfully.
Jan 16 20:42:37 localhost.localdomain kubelet.sh[2579]: I0116 20:42:37.742895 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/keepalived-localhost.localdomain" event=&{ID:f3cb0bd9c64889e06acccc1066e67828 Type:ContainerStarted Data:27f0a69b5a1170662ecdcb22b60df84ee82dc8b43f39e64495dfc15c1553e58b}
Jan 16 20:42:37 localhost.localdomain kubelet.sh[2579]: I0116 20:42:37.744247 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:37 localhost.localdomain kubelet.sh[2579]: I0116 20:42:37.760908 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:37 localhost.localdomain kubelet.sh[2579]: I0116 20:42:37.761174 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:37 localhost.localdomain kubelet.sh[2579]: I0116 20:42:37.761227 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:37 localhost.localdomain kubelet.sh[2579]: I0116 20:42:37.765167 2579 generic.go:334] "Generic (PLEG): container finished" podID=8fbf03b752412e8c829ad5b819ca09f0 containerID="15bd4a023d4b8b33230729b8f76441bd77247153171cfb31a44627de58b29f88" exitCode=0
Jan 16 20:42:37 localhost.localdomain kubelet.sh[2579]: I0116 20:42:37.765369 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/coredns-localhost.localdomain" event=&{ID:8fbf03b752412e8c829ad5b819ca09f0 Type:ContainerDied Data:15bd4a023d4b8b33230729b8f76441bd77247153171cfb31a44627de58b29f88}
Jan 16 20:42:37 localhost.localdomain kubelet.sh[2579]: I0116 20:42:37.766325 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:37 localhost.localdomain kubelet.sh[2579]: I0116 20:42:37.773390 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:37 localhost.localdomain kubelet.sh[2579]: I0116 20:42:37.774178 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:37 localhost.localdomain kubelet.sh[2579]: I0116 20:42:37.774718 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:37 localhost.localdomain crio[2304]: time="2024-01-16 20:42:37.777373777Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8861e361b2edf572c1973d0646cba4e4396d28dcd16a8ca4250c5c9eeb5a9069" id=7006e32a-3aa3-4802-b371-fa70010b1372 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:37 localhost.localdomain crio[2304]: time="2024-01-16 20:42:37.778521353Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8861e361b2edf572c1973d0646cba4e4396d28dcd16a8ca4250c5c9eeb5a9069 not found" id=7006e32a-3aa3-4802-b371-fa70010b1372 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:37 localhost.localdomain crio[2304]: time="2024-01-16 20:42:37.780833926Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8861e361b2edf572c1973d0646cba4e4396d28dcd16a8ca4250c5c9eeb5a9069" id=8a3734b6-75be-4f46-a5a9-4b19ffd14eca name=/runtime.v1.ImageService/PullImage
Jan 16 20:42:37 localhost.localdomain crio[2304]: time="2024-01-16 20:42:37.792402645Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8861e361b2edf572c1973d0646cba4e4396d28dcd16a8ca4250c5c9eeb5a9069\""
Jan 16 20:42:38 localhost.localdomain kubelet.sh[2579]: I0116 20:42:38.775098 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:38 localhost.localdomain kubelet.sh[2579]: I0116 20:42:38.785077 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:38 localhost.localdomain kubelet.sh[2579]: I0116 20:42:38.785487 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:38 localhost.localdomain kubelet.sh[2579]: I0116 20:42:38.785572 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:39 localhost.localdomain crio[2304]: time="2024-01-16 20:42:39.949216640Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8861e361b2edf572c1973d0646cba4e4396d28dcd16a8ca4250c5c9eeb5a9069\""
Jan 16 20:42:40 localhost.localdomain master-bmh-update.sh[7691]: E0116 20:42:40.012236 7691 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:42:40 localhost.localdomain master-bmh-update.sh[7691]: E0116 20:42:40.014833 7691 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:42:40 localhost.localdomain master-bmh-update.sh[7691]: E0116 20:42:40.018700 7691 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:42:40 localhost.localdomain master-bmh-update.sh[7691]: E0116 20:42:40.022883 7691 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:42:40 localhost.localdomain master-bmh-update.sh[7691]: E0116 20:42:40.027434 7691 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:42:40 localhost.localdomain master-bmh-update.sh[7691]: The connection to the server localhost:6443 was refused - did you specify the right host or port?
Jan 16 20:42:40 localhost.localdomain master-bmh-update.sh[6528]: Waiting for BareMetalHosts to appear...
Jan 16 20:42:43 localhost.localdomain kubelet.sh[2579]: I0116 20:42:43.712201 2579 kubelet.go:1486] "Image garbage collection succeeded"
Jan 16 20:42:43 localhost.localdomain kubelet.sh[2579]: I0116 20:42:43.729757 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 20:42:43 localhost.localdomain kubelet.sh[2579]: I0116 20:42:43.730125 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Pending
Jan 16 20:42:43 localhost.localdomain kubelet.sh[2579]: I0116 20:42:43.730232 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 20:42:43 localhost.localdomain kubelet.sh[2579]: I0116 20:42:43.730296 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 20:42:43 localhost.localdomain crio[2304]: time="2024-01-16 20:42:43.812655878Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2" id=225fd9fa-53af-4da8-800b-981acc908915 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:43 localhost.localdomain crio[2304]: time="2024-01-16 20:42:43.813258957Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a5beb712367dd5020b5a7b99c2ffbfcd91d3c6c425625d5cc816f58cf145564f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2],Size_:448590957,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=225fd9fa-53af-4da8-800b-981acc908915 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:45 localhost.localdomain kubelet.sh[2579]: I0116 20:42:45.689284 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:45 localhost.localdomain kubelet.sh[2579]: I0116 20:42:45.698262 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:45 localhost.localdomain kubelet.sh[2579]: I0116 20:42:45.698431 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:45 localhost.localdomain kubelet.sh[2579]: I0116 20:42:45.698475 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:48 localhost.localdomain kubelet.sh[2579]: I0116 20:42:48.606147 2579 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kni-infra/keepalived-localhost.localdomain" podUID=f3cb0bd9c64889e06acccc1066e67828 containerName="keepalived" probeResult=failure output=<
Jan 16 20:42:48 localhost.localdomain kubelet.sh[2579]: /bin/bash: line 2: kill: `': not a pid or valid job spec
Jan 16 20:42:48 localhost.localdomain kubelet.sh[2579]: >
Jan 16 20:42:49 localhost.localdomain crio[2304]: time="2024-01-16 20:42:49.149731036Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8861e361b2edf572c1973d0646cba4e4396d28dcd16a8ca4250c5c9eeb5a9069" id=8a3734b6-75be-4f46-a5a9-4b19ffd14eca name=/runtime.v1.ImageService/PullImage
Jan 16 20:42:49 localhost.localdomain crio[2304]: time="2024-01-16 20:42:49.156192622Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8861e361b2edf572c1973d0646cba4e4396d28dcd16a8ca4250c5c9eeb5a9069" id=7e83df2e-50f9-45e5-857d-9eabe604d7d9 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:49 localhost.localdomain crio[2304]: time="2024-01-16 20:42:49.162829392Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d3ac90001f220cdc645a143b3c256c39973fedc56577bc834bde478cc6686d38,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8861e361b2edf572c1973d0646cba4e4396d28dcd16a8ca4250c5c9eeb5a9069],Size_:521081835,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=7e83df2e-50f9-45e5-857d-9eabe604d7d9 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:49 localhost.localdomain crio[2304]: time="2024-01-16 20:42:49.169160846Z" level=info msg="Creating container: openshift-kni-infra/coredns-localhost.localdomain/coredns" id=f26762d3-41b7-41d4-9c48-389fe136571f name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:42:49 localhost.localdomain crio[2304]: time="2024-01-16 20:42:49.169865310Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:42:49 localhost.localdomain kubelet.sh[2579]: E0116 20:42:49.356498 2579 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests\": read /etc/kubernetes/manifests: is a directory"
Jan 16 20:42:49 localhost.localdomain systemd[1]: run-runc-23a7dbcb3283acf03eafcf5c8d7e5b76ba821720482533edd6603732aefc2915-runc.5Xs8NG.mount: Deactivated successfully.
Jan 16 20:42:49 localhost.localdomain systemd[1]: Started libcontainer container 23a7dbcb3283acf03eafcf5c8d7e5b76ba821720482533edd6603732aefc2915.
Jan 16 20:42:49 localhost.localdomain systemd[1]: Started crio-conmon-5ad86afb32109f303c2cdedf57d80e1846b7f7664e3806c1ad8ebc5282b6c07b.scope.
Jan 16 20:42:49 localhost.localdomain bootkube.sh[7556]: Starting temporary bootstrap control plane...
Jan 16 20:42:50 localhost.localdomain systemd[1]: run-runc-5ad86afb32109f303c2cdedf57d80e1846b7f7664e3806c1ad8ebc5282b6c07b-runc.gHAjbM.mount: Deactivated successfully.
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: E0116 20:42:50.013792 2579 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/bootstrap-pod.yaml\": /etc/kubernetes/manifests/bootstrap-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file"
Jan 16 20:42:50 localhost.localdomain bootkube.sh[7556]: Waiting up to 20m0s for the Kubernetes API
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.026882 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain]
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.027319 2579 topology_manager.go:212] "Topology Admit Handler" podUID=05c96ce8daffad47cf2b15e2a67753ec podNamespace="openshift-cluster-version" podName="bootstrap-cluster-version-operator-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.027700 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:50 localhost.localdomain systemd[1]: Started libcontainer container 5ad86afb32109f303c2cdedf57d80e1846b7f7664e3806c1ad8ebc5282b6c07b.
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.035271 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.035439 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.035527 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.036363 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain]
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.036525 2579 topology_manager.go:212] "Topology Admit Handler" podUID=a6238b9f1f3a2f2bd2b4b1b0c7962bdd podNamespace="openshift-cloud-credential-operator" podName="cloud-credential-operator-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.036736 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.054376 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.054460 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.054502 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.054850 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain]
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.055067 2579 topology_manager.go:212] "Topology Admit Handler" podUID=1cb3be1f2df5273e9b77f7050777bcbe podNamespace="openshift-kube-apiserver" podName="bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.055291 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.072863 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.073309 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.073363 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.079664 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[kube-system/bootstrap-kube-controller-manager-localhost.localdomain]
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.079749 2579 topology_manager.go:212] "Topology Admit Handler" podUID=c3db590e56a311b869092b2d6b1724e5 podNamespace="kube-system" podName="bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.080160 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.086472 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.086849 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.087499 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.099098 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[kube-system/bootstrap-kube-scheduler-localhost.localdomain]
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.099267 2579 topology_manager.go:212] "Topology Admit Handler" podUID=b8b0f2012ce2b145220be181d7a5aa55 podNamespace="kube-system" podName="bootstrap-kube-scheduler-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.099401 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:50 localhost.localdomain systemd[1]: Created slice libcontainer container kubepods-besteffort-pod05c96ce8daffad47cf2b15e2a67753ec.slice.
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.107659 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.107798 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.107841 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.152267 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.156261 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.156320 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.156364 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.161081 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-kubeconfig\") pod \"bootstrap-cluster-version-operator-localhost.localdomain\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.162135 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-etc-ssl-certs\") pod \"bootstrap-cluster-version-operator-localhost.localdomain\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain systemd[1]: Created slice libcontainer container kubepods-besteffort-poda6238b9f1f3a2f2bd2b4b1b0c7962bdd.slice.
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.213552 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.218229 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.218298 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.218335 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:50 localhost.localdomain systemd[1]: Created slice libcontainer container kubepods-burstable-pod1cb3be1f2df5273e9b77f7050777bcbe.slice.
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.265901 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.268355345Z" level=info msg="Created container 5ad86afb32109f303c2cdedf57d80e1846b7f7664e3806c1ad8ebc5282b6c07b: openshift-kni-infra/coredns-localhost.localdomain/coredns" id=f26762d3-41b7-41d4-9c48-389fe136571f name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.268555 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-secrets\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.269138 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a6238b9f1f3a2f2bd2b4b1b0c7962bdd-secrets\") pod \"cloud-credential-operator-localhost.localdomain\" (UID: \"a6238b9f1f3a2f2bd2b4b1b0c7962bdd\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.269321 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-audit-dir\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.269413 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-config\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.270103 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-kubeconfig\") pod \"bootstrap-cluster-version-operator-localhost.localdomain\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.270230 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-config\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.270314 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-kubeconfig\") pod \"bootstrap-cluster-version-operator-localhost.localdomain\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.270431 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-etc-ssl-certs\") pod \"bootstrap-cluster-version-operator-localhost.localdomain\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.270323 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-etc-ssl-certs\") pod \"bootstrap-cluster-version-operator-localhost.localdomain\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.270676 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.270841 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-logs\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.274363 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-ssl-certs-host\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.274719 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-secrets\") pod \"bootstrap-kube-scheduler-localhost.localdomain\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.274818 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-logs\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.274889 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-logs\") pod \"bootstrap-kube-scheduler-localhost.localdomain\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.275121 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-secrets\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.275224 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.275391 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain systemd[1]: Created slice libcontainer container kubepods-burstable-podc3db590e56a311b869092b2d6b1724e5.slice.
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.279077819Z" level=info msg="Starting container: 5ad86afb32109f303c2cdedf57d80e1846b7f7664e3806c1ad8ebc5282b6c07b" id=75a8f2a3-7c0b-4027-8ab1-834376fea227 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.281441 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.281665 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.281716 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.300666 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.305290 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.305709 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.305776 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.321508201Z" level=info msg="Started container" PID=7800 containerID=5ad86afb32109f303c2cdedf57d80e1846b7f7664e3806c1ad8ebc5282b6c07b description=openshift-kni-infra/coredns-localhost.localdomain/coredns id=75a8f2a3-7c0b-4027-8ab1-834376fea227 name=/runtime.v1.RuntimeService/StartContainer sandboxID=917f47085fabfd7e7639a927d28cd2921f4061ce59b8f79a55177f4a5f77ad15
Jan 16 20:42:50 localhost.localdomain systemd[1]: Created slice libcontainer container kubepods-burstable-podb8b0f2012ce2b145220be181d7a5aa55.slice.
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.357080 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.367191 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.367273 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.367314 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.376291 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.376527 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-logs\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.376750 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-ssl-certs-host\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.376840 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-secrets\") pod \"bootstrap-kube-scheduler-localhost.localdomain\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.377070 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-logs\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.377161 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-logs\") pod \"bootstrap-kube-scheduler-localhost.localdomain\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.377274 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-secrets\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.377702 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.377797 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.377880 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-secrets\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.378125 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a6238b9f1f3a2f2bd2b4b1b0c7962bdd-secrets\") pod \"cloud-credential-operator-localhost.localdomain\" (UID: \"a6238b9f1f3a2f2bd2b4b1b0c7962bdd\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.378210 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-audit-dir\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.378285 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-config\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.378363 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-config\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.378744 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-config\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.378878 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-logs\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.379149 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-secrets\") pod \"bootstrap-kube-scheduler-localhost.localdomain\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.379365 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-logs\") pod \"bootstrap-kube-scheduler-localhost.localdomain\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.379502 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-secrets\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.379773 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.379904 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.380217 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-secrets\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.380356 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a6238b9f1f3a2f2bd2b4b1b0c7962bdd-secrets\") pod \"cloud-credential-operator-localhost.localdomain\" (UID: \"a6238b9f1f3a2f2bd2b4b1b0c7962bdd\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.380723 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-audit-dir\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.380811 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.381121 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-logs\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.381234 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-ssl-certs-host\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.380852 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-config\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.458098 2579 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.461125609Z" level=info msg="Running pod sandbox: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/POD" id=bd39a634-2fc1-4865-95d9-66e306f59488 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.461382612Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.521347 2579 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.523361215Z" level=info msg="Running pod sandbox: openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/POD" id=f438687b-82bc-46e5-84f2-eefb8f2a4f2e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.523703368Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.534419118Z" level=info msg="Ran pod sandbox 70686be8a2d87683a00828f4233d059638689db262cbef7d341c1f46aeb3fd09 with infra container: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/POD" id=bd39a634-2fc1-4865-95d9-66e306f59488 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.540259486Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=ef01fa92-c185-4b7f-b152-00ef91dcfc38 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.550889191Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:40e15091a793905eb63a02d951105fc5c5904bfb294f8004c052ac950c9ac44a,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa],Size_:522846560,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=ef01fa92-c185-4b7f-b152-00ef91dcfc38 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.558427059Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=b5b87587-40f3-4cf4-9cea-f3fe64cd84cd name=/runtime.v1.ImageService/PullImage
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.579819634Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa\""
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.584129 2579 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.592246558Z" level=info msg="Ran pod sandbox 26024c8016ef3e2119dd507f560533c94af57eb36863fae575a12ac36b7c6b00 with infra container: openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/POD" id=f438687b-82bc-46e5-84f2-eefb8f2a4f2e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.594858667Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1cccbc92c83dd170dea8cb72a09e96facba21f3fdf5e3dd3f3009796c481cd67" id=b492438c-20bb-49d9-a9af-9462f779a8e0 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.595555554Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:90bdc1613647030f9fe768ad330e8ff0dca1cc04bf002dc32974238943125b9c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1cccbc92c83dd170dea8cb72a09e96facba21f3fdf5e3dd3f3009796c481cd67],Size_:704416475,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=b492438c-20bb-49d9-a9af-9462f779a8e0 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.607628 2579 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.594903363Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/POD" id=fae2ff52-f08d-49e0-88b9-34d2a07994eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.610785412Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.597804214Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1cccbc92c83dd170dea8cb72a09e96facba21f3fdf5e3dd3f3009796c481cd67" id=777c3911-6c0d-47a1-81ea-fdb16de6cdc4 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.612304685Z" level=info msg="Running pod sandbox: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/POD" id=c33d95fc-f7e2-44ae-a090-3ce6f6ab3290 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.612541543Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.613145947Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:90bdc1613647030f9fe768ad330e8ff0dca1cc04bf002dc32974238943125b9c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1cccbc92c83dd170dea8cb72a09e96facba21f3fdf5e3dd3f3009796c481cd67],Size_:704416475,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=777c3911-6c0d-47a1-81ea-fdb16de6cdc4 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.616302688Z" level=info msg="Creating container: openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/cloud-credential-operator" id=b9cc0dcc-16a3-4420-bd32-6e75e9a36ca5 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.617191220Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.661647503Z" level=info msg="Ran pod sandbox 79c10015fd162b8e62ecb33ebeccbd5e476b9a518fb7eb7c00b519d5bb0eb934 with infra container: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/POD" id=c33d95fc-f7e2-44ae-a090-3ce6f6ab3290 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.668893 2579 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain"
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.669356321Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=f43f2fa5-8468-4465-96d9-0b90a565b230 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.670125593Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1 not found" id=f43f2fa5-8468-4465-96d9-0b90a565b230 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.670746675Z" level=info msg="Running pod sandbox: kube-system/bootstrap-kube-scheduler-localhost.localdomain/POD" id=36899efb-286e-47c4-9ae3-f2db60881092 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.671044084Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.674328752Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=cea10e8c-cf05-4faf-bfce-d7b0e17753db name=/runtime.v1.ImageService/PullImage
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.686441216Z" level=info msg="Ran pod sandbox ad4d9c6ed5d6ab8a2a9b57904014b7c602165d8b2f56808bf0a162e61ca5e05d with infra container: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/POD" id=fae2ff52-f08d-49e0-88b9-34d2a07994eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.695160148Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1\""
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.701793646Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=c6ebbd42-91d7-4cd8-bfd6-90c4ad866f20 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.702726172Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1 not found" id=c6ebbd42-91d7-4cd8-bfd6-90c4ad866f20 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.705205158Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=3031ac25-866f-45d8-a962-f149a2eae08b name=/runtime.v1.ImageService/PullImage
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.712367582Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1\""
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.726431434Z" level=info msg="Ran pod sandbox 8ef4b7210274a6b52b1f275b2b88575b44667f9376ae93b8eea1a279639e87b6 with infra container: kube-system/bootstrap-kube-scheduler-localhost.localdomain/POD" id=36899efb-286e-47c4-9ae3-f2db60881092 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.735764128Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=97add4cd-f218-4de4-b553-874fdc88ea3a name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.737428412Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1 not found" id=97add4cd-f218-4de4-b553-874fdc88ea3a name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.744111121Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=b6741b46-fbfc-4850-b972-a81c86e0384d name=/runtime.v1.ImageService/PullImage
Jan 16 20:42:50 localhost.localdomain crio[2304]: time="2024-01-16 20:42:50.747864552Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1\""
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.906527 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/coredns-localhost.localdomain" event=&{ID:8fbf03b752412e8c829ad5b819ca09f0 Type:ContainerStarted Data:5ad86afb32109f303c2cdedf57d80e1846b7f7664e3806c1ad8ebc5282b6c07b}
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.907751 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.913477 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.913797 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.914254 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.913816 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" event=&{ID:1cb3be1f2df5273e9b77f7050777bcbe Type:ContainerStarted Data:ad4d9c6ed5d6ab8a2a9b57904014b7c602165d8b2f56808bf0a162e61ca5e05d}
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.920255 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" event=&{ID:c3db590e56a311b869092b2d6b1724e5 Type:ContainerStarted Data:79c10015fd162b8e62ecb33ebeccbd5e476b9a518fb7eb7c00b519d5bb0eb934}
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.924184 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" event=&{ID:b8b0f2012ce2b145220be181d7a5aa55 Type:ContainerStarted Data:8ef4b7210274a6b52b1f275b2b88575b44667f9376ae93b8eea1a279639e87b6}
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.929304 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" event=&{ID:a6238b9f1f3a2f2bd2b4b1b0c7962bdd Type:ContainerStarted Data:26024c8016ef3e2119dd507f560533c94af57eb36863fae575a12ac36b7c6b00}
Jan 16 20:42:50 localhost.localdomain kubelet.sh[2579]: I0116 20:42:50.933841 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" event=&{ID:05c96ce8daffad47cf2b15e2a67753ec Type:ContainerStarted Data:70686be8a2d87683a00828f4233d059638689db262cbef7d341c1f46aeb3fd09}
Jan 16 20:42:51 localhost.localdomain bootkube.sh[7556]: Still waiting for the Kubernetes API: Get "https://localhost:6443/readyz": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:42:51 localhost.localdomain systemd[1]: Started crio-conmon-0788d090c8866cdce69b0836680b2f097ddf00276c15b5fb80e2d55e2c7e6c87.scope.
Jan 16 20:42:51 localhost.localdomain systemd[1]: Started libcontainer container 0788d090c8866cdce69b0836680b2f097ddf00276c15b5fb80e2d55e2c7e6c87.
Jan 16 20:42:51 localhost.localdomain crio[2304]: time="2024-01-16 20:42:51.312365592Z" level=info msg="Created container 0788d090c8866cdce69b0836680b2f097ddf00276c15b5fb80e2d55e2c7e6c87: openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/cloud-credential-operator" id=b9cc0dcc-16a3-4420-bd32-6e75e9a36ca5 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:42:51 localhost.localdomain crio[2304]: time="2024-01-16 20:42:51.314855212Z" level=info msg="Starting container: 0788d090c8866cdce69b0836680b2f097ddf00276c15b5fb80e2d55e2c7e6c87" id=5263a7e6-8a8f-4c5c-b1de-ca95f2c9d45f name=/runtime.v1.RuntimeService/StartContainer
Jan 16 20:42:51 localhost.localdomain crio[2304]: time="2024-01-16 20:42:51.356458264Z" level=info msg="Started container" PID=7850 containerID=0788d090c8866cdce69b0836680b2f097ddf00276c15b5fb80e2d55e2c7e6c87 description=openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/cloud-credential-operator id=5263a7e6-8a8f-4c5c-b1de-ca95f2c9d45f name=/runtime.v1.RuntimeService/StartContainer sandboxID=26024c8016ef3e2119dd507f560533c94af57eb36863fae575a12ac36b7c6b00
Jan 16 20:42:51 localhost.localdomain kubelet.sh[2579]: I0116 20:42:51.958232 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" event=&{ID:a6238b9f1f3a2f2bd2b4b1b0c7962bdd Type:ContainerStarted Data:0788d090c8866cdce69b0836680b2f097ddf00276c15b5fb80e2d55e2c7e6c87}
Jan 16 20:42:51 localhost.localdomain kubelet.sh[2579]: I0116 20:42:51.958808 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:51 localhost.localdomain kubelet.sh[2579]: I0116 20:42:51.959337 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:51 localhost.localdomain kubelet.sh[2579]: I0116 20:42:51.964368 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:51 localhost.localdomain kubelet.sh[2579]: I0116 20:42:51.964669 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:51 localhost.localdomain kubelet.sh[2579]: I0116 20:42:51.964738 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:51 localhost.localdomain kubelet.sh[2579]: I0116 20:42:51.964800 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:51 localhost.localdomain kubelet.sh[2579]: I0116 20:42:51.964881 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:51 localhost.localdomain kubelet.sh[2579]: I0116 20:42:51.965115 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:52 localhost.localdomain crio[2304]: time="2024-01-16 20:42:52.760299878Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=b5b87587-40f3-4cf4-9cea-f3fe64cd84cd name=/runtime.v1.ImageService/PullImage
Jan 16 20:42:52 localhost.localdomain crio[2304]: time="2024-01-16 20:42:52.764360586Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=7db95331-20df-4e24-8b7f-92d9f5bf18b1 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:52 localhost.localdomain crio[2304]: time="2024-01-16 20:42:52.765483188Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:40e15091a793905eb63a02d951105fc5c5904bfb294f8004c052ac950c9ac44a,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa],Size_:522846560,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=7db95331-20df-4e24-8b7f-92d9f5bf18b1 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:42:52 localhost.localdomain crio[2304]: time="2024-01-16 20:42:52.770290663Z" level=info msg="Creating container: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator" id=adab4baa-747b-473a-b1b0-c25ecd030c9f name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:42:52 localhost.localdomain crio[2304]: time="2024-01-16 20:42:52.771242222Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:42:52 localhost.localdomain crio[2304]: time="2024-01-16 20:42:52.869374126Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1\""
Jan 16 20:42:52 localhost.localdomain crio[2304]: time="2024-01-16 20:42:52.887389566Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1\""
Jan 16 20:42:52 localhost.localdomain crio[2304]: time="2024-01-16 20:42:52.957353045Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1\""
Jan 16 20:42:52 localhost.localdomain kubelet.sh[2579]: I0116 20:42:52.966738 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:52 localhost.localdomain kubelet.sh[2579]: I0116 20:42:52.973508 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:52 localhost.localdomain kubelet.sh[2579]: I0116 20:42:52.973729 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:52 localhost.localdomain kubelet.sh[2579]: I0116 20:42:52.973799 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:53 localhost.localdomain systemd[1]: Started crio-conmon-c3ec8cfb2e6a164cb42f2f498f8588fcd9acb5c4332052283a291e5cfc99bc65.scope.
Jan 16 20:42:53 localhost.localdomain systemd[1]: Started libcontainer container c3ec8cfb2e6a164cb42f2f498f8588fcd9acb5c4332052283a291e5cfc99bc65.
Jan 16 20:42:53 localhost.localdomain crio[2304]: time="2024-01-16 20:42:53.615817250Z" level=info msg="Created container c3ec8cfb2e6a164cb42f2f498f8588fcd9acb5c4332052283a291e5cfc99bc65: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator" id=adab4baa-747b-473a-b1b0-c25ecd030c9f name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:42:53 localhost.localdomain crio[2304]: time="2024-01-16 20:42:53.619811713Z" level=info msg="Starting container: c3ec8cfb2e6a164cb42f2f498f8588fcd9acb5c4332052283a291e5cfc99bc65" id=21ae55a7-1ab9-44ae-bc83-6d82774e6b77 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 20:42:53 localhost.localdomain crio[2304]: time="2024-01-16 20:42:53.668372573Z" level=info msg="Started container" PID=7900 containerID=c3ec8cfb2e6a164cb42f2f498f8588fcd9acb5c4332052283a291e5cfc99bc65 description=openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator id=21ae55a7-1ab9-44ae-bc83-6d82774e6b77 name=/runtime.v1.RuntimeService/StartContainer sandboxID=70686be8a2d87683a00828f4233d059638689db262cbef7d341c1f46aeb3fd09
Jan 16 20:42:53 localhost.localdomain kubelet.sh[2579]: I0116 20:42:53.979780 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" event=&{ID:05c96ce8daffad47cf2b15e2a67753ec Type:ContainerStarted Data:c3ec8cfb2e6a164cb42f2f498f8588fcd9acb5c4332052283a291e5cfc99bc65}
Jan 16 20:42:53 localhost.localdomain kubelet.sh[2579]: I0116 20:42:53.980770 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:53 localhost.localdomain kubelet.sh[2579]: I0116 20:42:53.985546 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:53 localhost.localdomain kubelet.sh[2579]: I0116 20:42:53.985770 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:53 localhost.localdomain kubelet.sh[2579]: I0116 20:42:53.985821 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:54 localhost.localdomain kubelet.sh[2579]: I0116 20:42:54.985886 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:54 localhost.localdomain kubelet.sh[2579]: I0116 20:42:54.993879 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:54 localhost.localdomain kubelet.sh[2579]: I0116 20:42:54.994172 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:54 localhost.localdomain kubelet.sh[2579]: I0116 20:42:54.994227 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:55 localhost.localdomain kubelet.sh[2579]: I0116 20:42:55.781694 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:42:55 localhost.localdomain kubelet.sh[2579]: I0116 20:42:55.800516 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:42:55 localhost.localdomain kubelet.sh[2579]: I0116 20:42:55.801183 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:42:55 localhost.localdomain kubelet.sh[2579]: I0116 20:42:55.801249 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:42:56 localhost.localdomain approve-csr.sh[7933]: E0116 20:42:56.976509 7933 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:42:56 localhost.localdomain approve-csr.sh[7933]: E0116 20:42:56.978067 7933 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:42:56 localhost.localdomain approve-csr.sh[7933]: E0116 20:42:56.979260 7933 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:42:56 localhost.localdomain approve-csr.sh[7933]: E0116 20:42:56.980437 7933 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:42:56 localhost.localdomain approve-csr.sh[7933]: E0116 20:42:56.981668 7933 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:42:56 localhost.localdomain approve-csr.sh[7933]: The connection to the server localhost:6443 was refused - did you specify the right host or port?
Jan 16 20:42:58 localhost.localdomain systemd[1]: run-runc-d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8-runc.CWBw85.mount: Deactivated successfully.
Jan 16 20:42:58 localhost.localdomain kubelet.sh[2579]: I0116 20:42:58.572393 2579 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kni-infra/keepalived-localhost.localdomain" podUID=f3cb0bd9c64889e06acccc1066e67828 containerName="keepalived" probeResult=failure output=<
Jan 16 20:42:58 localhost.localdomain kubelet.sh[2579]: /bin/bash: line 2: kill: `': not a pid or valid job spec
Jan 16 20:42:58 localhost.localdomain kubelet.sh[2579]: >
Jan 16 20:43:00 localhost.localdomain master-bmh-update.sh[7970]: E0116 20:43:00.429860 7970 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:00 localhost.localdomain master-bmh-update.sh[7970]: E0116 20:43:00.432209 7970 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:00 localhost.localdomain master-bmh-update.sh[7970]: E0116 20:43:00.434138 7970 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:00 localhost.localdomain master-bmh-update.sh[7970]: E0116 20:43:00.435660 7970 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:00 localhost.localdomain master-bmh-update.sh[7970]: E0116 20:43:00.437344 7970 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:00 localhost.localdomain master-bmh-update.sh[7970]: The connection to the server localhost:6443 was refused - did you specify the right host or port?
Jan 16 20:43:00 localhost.localdomain master-bmh-update.sh[6528]: Waiting for BareMetalHosts to appear...
Jan 16 20:43:00 localhost.localdomain kubelet.sh[2579]: I0116 20:43:00.470161 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:00 localhost.localdomain kubelet.sh[2579]: I0116 20:43:00.479860 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:00 localhost.localdomain kubelet.sh[2579]: I0116 20:43:00.480010 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:00 localhost.localdomain kubelet.sh[2579]: I0116 20:43:00.480048 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:06 localhost.localdomain kubelet.sh[2579]: I0116 20:43:06.066591 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:06 localhost.localdomain kubelet.sh[2579]: I0116 20:43:06.071665 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:06 localhost.localdomain kubelet.sh[2579]: I0116 20:43:06.072070 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:06 localhost.localdomain kubelet.sh[2579]: I0116 20:43:06.072436 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:08 localhost.localdomain systemd[1]: run-runc-d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8-runc.Lu5EOa.mount: Deactivated successfully.
Jan 16 20:43:08 localhost.localdomain kubelet.sh[2579]: I0116 20:43:08.604676 2579 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kni-infra/keepalived-localhost.localdomain" podUID=f3cb0bd9c64889e06acccc1066e67828 containerName="keepalived" probeResult=failure output=<
Jan 16 20:43:08 localhost.localdomain kubelet.sh[2579]: /bin/bash: line 2: kill: `': not a pid or valid job spec
Jan 16 20:43:08 localhost.localdomain kubelet.sh[2579]: >
Jan 16 20:43:08 localhost.localdomain kubelet.sh[2579]: I0116 20:43:08.604840 2579 kubelet.go:2529] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kni-infra/keepalived-localhost.localdomain"
Jan 16 20:43:08 localhost.localdomain kubelet.sh[2579]: I0116 20:43:08.605665 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:08 localhost.localdomain kubelet.sh[2579]: I0116 20:43:08.608346 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:08 localhost.localdomain kubelet.sh[2579]: I0116 20:43:08.608440 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:08 localhost.localdomain kubelet.sh[2579]: I0116 20:43:08.608470 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:08 localhost.localdomain kubelet.sh[2579]: I0116 20:43:08.609304 2579 kuberuntime_manager.go:991] "Message for Container of pod" containerName="keepalived" containerStatusID={Type:cri-o ID:d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8} pod="openshift-kni-infra/keepalived-localhost.localdomain" containerMessage="Container keepalived failed liveness probe, will be restarted"
Jan 16 20:43:08 localhost.localdomain kubelet.sh[2579]: I0116 20:43:08.609711 2579 kuberuntime_container.go:742] "Killing container with a grace period" pod="openshift-kni-infra/keepalived-localhost.localdomain" podUID=f3cb0bd9c64889e06acccc1066e67828 containerName="keepalived" containerID="cri-o://d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8" gracePeriod=65
Jan 16 20:43:08 localhost.localdomain crio[2304]: time="2024-01-16 20:43:08.611403698Z" level=info msg="Stopping container: d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8 (timeout: 65s)" id=7bbf159f-be09-4878-aac2-2c68c1f75f0d name=/runtime.v1.RuntimeService/StopContainer
Jan 16 20:43:08 localhost.localdomain systemd[1]: crio-d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8.scope: Deactivated successfully.
Jan 16 20:43:08 localhost.localdomain conmon[7348]: conmon d13d10ac5144cf00fa22 : container 7369 exited with status 143
Jan 16 20:43:08 localhost.localdomain conmon[7348]: conmon d13d10ac5144cf00fa22 : Failed to open cgroups file: /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3cb0bd9c64889e06acccc1066e67828.slice/crio-d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8.scope/memory.events
Jan 16 20:43:08 localhost.localdomain systemd[1]: crio-conmon-d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8.scope: Deactivated successfully.
Jan 16 20:43:08 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-8fb377613461a47e1f07a9c22ad6913599791da70162a8dfa6ac87e0eeb8c698-merged.mount: Deactivated successfully.
Jan 16 20:43:08 localhost.localdomain crio[2304]: time="2024-01-16 20:43:08.911158380Z" level=info msg="Stopped container d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8: openshift-kni-infra/keepalived-localhost.localdomain/keepalived" id=7bbf159f-be09-4878-aac2-2c68c1f75f0d name=/runtime.v1.RuntimeService/StopContainer
Jan 16 20:43:08 localhost.localdomain crio[2304]: time="2024-01-16 20:43:08.913639050Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:594f2e4ef75bf8bfd342670ddd1d50bd97888671f13b8b566af8c568285de689" id=bddf7d19-2dbb-4c5b-bed0-40900146d3b0 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:08 localhost.localdomain crio[2304]: time="2024-01-16 20:43:08.914261155Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:5cb5dd5856f0cbd66a3227a48d327384ad2ba615d2e9f2313428232427b8aeb7,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:594f2e4ef75bf8bfd342670ddd1d50bd97888671f13b8b566af8c568285de689],Size_:537687465,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=bddf7d19-2dbb-4c5b-bed0-40900146d3b0 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:08 localhost.localdomain crio[2304]: time="2024-01-16 20:43:08.923816494Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:594f2e4ef75bf8bfd342670ddd1d50bd97888671f13b8b566af8c568285de689" id=92cacc81-a954-4e31-b06c-672d1b352ec3 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:08 localhost.localdomain crio[2304]: time="2024-01-16 20:43:08.925374244Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:5cb5dd5856f0cbd66a3227a48d327384ad2ba615d2e9f2313428232427b8aeb7,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:594f2e4ef75bf8bfd342670ddd1d50bd97888671f13b8b566af8c568285de689],Size_:537687465,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=92cacc81-a954-4e31-b06c-672d1b352ec3 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:08 localhost.localdomain crio[2304]: time="2024-01-16 20:43:08.930741868Z" level=info msg="Creating container: openshift-kni-infra/keepalived-localhost.localdomain/keepalived" id=549a9d95-c8e7-4c97-a454-7a502958d244 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:43:08 localhost.localdomain crio[2304]: time="2024-01-16 20:43:08.932637223Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:43:09 localhost.localdomain systemd[1]: Started crio-conmon-4d120e2e2e8ca56246d26010aef81ab977c27638ea1c395a3bacc9d53efe0757.scope.
Jan 16 20:43:09 localhost.localdomain systemd[1]: Started libcontainer container 4d120e2e2e8ca56246d26010aef81ab977c27638ea1c395a3bacc9d53efe0757.
Jan 16 20:43:09 localhost.localdomain kubelet.sh[2579]: I0116 20:43:09.132096 2579 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kni-infra_keepalived-localhost.localdomain_f3cb0bd9c64889e06acccc1066e67828/keepalived/0.log"
Jan 16 20:43:09 localhost.localdomain kubelet.sh[2579]: I0116 20:43:09.132276 2579 generic.go:334] "Generic (PLEG): container finished" podID=f3cb0bd9c64889e06acccc1066e67828 containerID="d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8" exitCode=143
Jan 16 20:43:09 localhost.localdomain kubelet.sh[2579]: I0116 20:43:09.132348 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/keepalived-localhost.localdomain" event=&{ID:f3cb0bd9c64889e06acccc1066e67828 Type:ContainerDied Data:d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8}
Jan 16 20:43:09 localhost.localdomain crio[2304]: time="2024-01-16 20:43:09.229380508Z" level=info msg="Created container 4d120e2e2e8ca56246d26010aef81ab977c27638ea1c395a3bacc9d53efe0757: openshift-kni-infra/keepalived-localhost.localdomain/keepalived" id=549a9d95-c8e7-4c97-a454-7a502958d244 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:43:09 localhost.localdomain crio[2304]: time="2024-01-16 20:43:09.231377939Z" level=info msg="Starting container: 4d120e2e2e8ca56246d26010aef81ab977c27638ea1c395a3bacc9d53efe0757" id=8068a17c-489b-49e3-9564-0067348423a4 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 20:43:09 localhost.localdomain crio[2304]: time="2024-01-16 20:43:09.283462645Z" level=info msg="Started container" PID=8051 containerID=4d120e2e2e8ca56246d26010aef81ab977c27638ea1c395a3bacc9d53efe0757 description=openshift-kni-infra/keepalived-localhost.localdomain/keepalived id=8068a17c-489b-49e3-9564-0067348423a4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ebdc370e2c6148b8fcf32f4fc2cc95081bf61cd8d6252b3c4013c6ed54602ca
Jan 16 20:43:10 localhost.localdomain kubelet.sh[2579]: I0116 20:43:10.154455 2579 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kni-infra_keepalived-localhost.localdomain_f3cb0bd9c64889e06acccc1066e67828/keepalived/0.log"
Jan 16 20:43:10 localhost.localdomain kubelet.sh[2579]: I0116 20:43:10.156496 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:10 localhost.localdomain kubelet.sh[2579]: I0116 20:43:10.160128 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/keepalived-localhost.localdomain" event=&{ID:f3cb0bd9c64889e06acccc1066e67828 Type:ContainerStarted Data:4d120e2e2e8ca56246d26010aef81ab977c27638ea1c395a3bacc9d53efe0757}
Jan 16 20:43:10 localhost.localdomain kubelet.sh[2579]: I0116 20:43:10.162612 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:10 localhost.localdomain kubelet.sh[2579]: I0116 20:43:10.163168 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:10 localhost.localdomain kubelet.sh[2579]: I0116 20:43:10.163227 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:11 localhost.localdomain kubelet.sh[2579]: I0116 20:43:11.160216 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:11 localhost.localdomain kubelet.sh[2579]: I0116 20:43:11.163276 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:11 localhost.localdomain kubelet.sh[2579]: I0116 20:43:11.163376 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:11 localhost.localdomain kubelet.sh[2579]: I0116 20:43:11.163405 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:16 localhost.localdomain kubelet.sh[2579]: I0116 20:43:16.131461 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:16 localhost.localdomain kubelet.sh[2579]: I0116 20:43:16.143312 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:16 localhost.localdomain kubelet.sh[2579]: I0116 20:43:16.143472 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:16 localhost.localdomain kubelet.sh[2579]: I0116 20:43:16.143601 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:17 localhost.localdomain approve-csr.sh[8085]: E0116 20:43:17.432150 8085 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:17 localhost.localdomain approve-csr.sh[8085]: E0116 20:43:17.433756 8085 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:17 localhost.localdomain approve-csr.sh[8085]: E0116 20:43:17.435613 8085 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:17 localhost.localdomain approve-csr.sh[8085]: E0116 20:43:17.437483 8085 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:17 localhost.localdomain approve-csr.sh[8085]: E0116 20:43:17.438788 8085 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:17 localhost.localdomain approve-csr.sh[8085]: The connection to the server localhost:6443 was refused - did you specify the right host or port?
Jan 16 20:43:18 localhost.localdomain systemd[1]: crio-c3ec8cfb2e6a164cb42f2f498f8588fcd9acb5c4332052283a291e5cfc99bc65.scope: Deactivated successfully.
Jan 16 20:43:18 localhost.localdomain conmon[7888]: conmon c3ec8cfb2e6a164cb42f : container 7900 exited with status 255
Jan 16 20:43:18 localhost.localdomain systemd[1]: crio-conmon-c3ec8cfb2e6a164cb42f2f498f8588fcd9acb5c4332052283a291e5cfc99bc65.scope: Deactivated successfully.
Jan 16 20:43:19 localhost.localdomain kubelet.sh[2579]: I0116 20:43:19.227315 2579 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_bootstrap-cluster-version-operator-localhost.localdomain_05c96ce8daffad47cf2b15e2a67753ec/cluster-version-operator/0.log"
Jan 16 20:43:19 localhost.localdomain kubelet.sh[2579]: I0116 20:43:19.227588 2579 generic.go:334] "Generic (PLEG): container finished" podID=05c96ce8daffad47cf2b15e2a67753ec containerID="c3ec8cfb2e6a164cb42f2f498f8588fcd9acb5c4332052283a291e5cfc99bc65" exitCode=255
Jan 16 20:43:19 localhost.localdomain kubelet.sh[2579]: I0116 20:43:19.227712 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" event=&{ID:05c96ce8daffad47cf2b15e2a67753ec Type:ContainerDied Data:c3ec8cfb2e6a164cb42f2f498f8588fcd9acb5c4332052283a291e5cfc99bc65}
Jan 16 20:43:19 localhost.localdomain kubelet.sh[2579]: I0116 20:43:19.228796 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:19 localhost.localdomain kubelet.sh[2579]: I0116 20:43:19.234268 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:19 localhost.localdomain kubelet.sh[2579]: I0116 20:43:19.234722 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:19 localhost.localdomain kubelet.sh[2579]: I0116 20:43:19.235103 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:19 localhost.localdomain kubelet.sh[2579]: I0116 20:43:19.235761 2579 scope.go:115] "RemoveContainer" containerID="c3ec8cfb2e6a164cb42f2f498f8588fcd9acb5c4332052283a291e5cfc99bc65"
Jan 16 20:43:19 localhost.localdomain crio[2304]: time="2024-01-16 20:43:19.245890899Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=249a3384-5ce2-41df-b350-6db3f5c92775 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:19 localhost.localdomain crio[2304]: time="2024-01-16 20:43:19.246865564Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:40e15091a793905eb63a02d951105fc5c5904bfb294f8004c052ac950c9ac44a,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa],Size_:522846560,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=249a3384-5ce2-41df-b350-6db3f5c92775 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:19 localhost.localdomain crio[2304]: time="2024-01-16 20:43:19.249049074Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=7fc3eea4-eb01-49bb-8110-c0fd6606fd3d name=/runtime.v1.ImageService/PullImage
Jan 16 20:43:19 localhost.localdomain crio[2304]: time="2024-01-16 20:43:19.257341244Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa\""
Jan 16 20:43:20 localhost.localdomain kubelet.sh[2579]: I0116 20:43:20.478266 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:20 localhost.localdomain kubelet.sh[2579]: I0116 20:43:20.494313 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:20 localhost.localdomain kubelet.sh[2579]: I0116 20:43:20.494449 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:20 localhost.localdomain kubelet.sh[2579]: I0116 20:43:20.494563 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:20 localhost.localdomain master-bmh-update.sh[8113]: E0116 20:43:20.900289 8113 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:20 localhost.localdomain master-bmh-update.sh[8113]: E0116 20:43:20.904702 8113 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:20 localhost.localdomain master-bmh-update.sh[8113]: E0116 20:43:20.916674 8113 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:20 localhost.localdomain master-bmh-update.sh[8113]: E0116 20:43:20.925351 8113 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:20 localhost.localdomain master-bmh-update.sh[8113]: E0116 20:43:20.927683 8113 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:20 localhost.localdomain master-bmh-update.sh[8113]: The connection to the server localhost:6443 was refused - did you specify the right host or port?
Jan 16 20:43:20 localhost.localdomain master-bmh-update.sh[6528]: Waiting for BareMetalHosts to appear...
Jan 16 20:43:21 localhost.localdomain crio[2304]: time="2024-01-16 20:43:21.900214000Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=7fc3eea4-eb01-49bb-8110-c0fd6606fd3d name=/runtime.v1.ImageService/PullImage
Jan 16 20:43:21 localhost.localdomain crio[2304]: time="2024-01-16 20:43:21.912685382Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=009621d9-2a6a-4fbf-af51-606cf6fd4806 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:21 localhost.localdomain crio[2304]: time="2024-01-16 20:43:21.913634803Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:40e15091a793905eb63a02d951105fc5c5904bfb294f8004c052ac950c9ac44a,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa],Size_:522846560,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=009621d9-2a6a-4fbf-af51-606cf6fd4806 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:21 localhost.localdomain crio[2304]: time="2024-01-16 20:43:21.917777392Z" level=info msg="Creating container: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator" id=44882f9f-654b-4ca1-b54b-83083f95fa17 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:43:21 localhost.localdomain crio[2304]: time="2024-01-16 20:43:21.918757180Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:43:22 localhost.localdomain systemd[1]: Started crio-conmon-90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d.scope.
Jan 16 20:43:22 localhost.localdomain systemd[1]: Started libcontainer container 90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d.
Jan 16 20:43:23 localhost.localdomain crio[2304]: time="2024-01-16 20:43:23.853692727Z" level=info msg="Created container 90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator" id=44882f9f-654b-4ca1-b54b-83083f95fa17 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:43:23 localhost.localdomain crio[2304]: time="2024-01-16 20:43:23.858602033Z" level=info msg="Starting container: 90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d" id=cd96f612-bac7-48c6-bd1d-3072fdbc4cb6 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 20:43:23 localhost.localdomain crio[2304]: time="2024-01-16 20:43:23.955256999Z" level=info msg="Started container" PID=8134 containerID=90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d description=openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator id=cd96f612-bac7-48c6-bd1d-3072fdbc4cb6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=70686be8a2d87683a00828f4233d059638689db262cbef7d341c1f46aeb3fd09
Jan 16 20:43:24 localhost.localdomain kubelet.sh[2579]: I0116 20:43:24.343682 2579 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_bootstrap-cluster-version-operator-localhost.localdomain_05c96ce8daffad47cf2b15e2a67753ec/cluster-version-operator/0.log"
Jan 16 20:43:24 localhost.localdomain kubelet.sh[2579]: I0116 20:43:24.350076 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" event=&{ID:05c96ce8daffad47cf2b15e2a67753ec Type:ContainerStarted Data:90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d}
Jan 16 20:43:24 localhost.localdomain kubelet.sh[2579]: I0116 20:43:24.352377 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:24 localhost.localdomain kubelet.sh[2579]: I0116 20:43:24.366287 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:24 localhost.localdomain kubelet.sh[2579]: I0116 20:43:24.366537 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:24 localhost.localdomain kubelet.sh[2579]: I0116 20:43:24.366574 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:26 localhost.localdomain kubelet.sh[2579]: I0116 20:43:26.249149 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:26 localhost.localdomain kubelet.sh[2579]: I0116 20:43:26.255361 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:26 localhost.localdomain kubelet.sh[2579]: I0116 20:43:26.255538 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:26 localhost.localdomain kubelet.sh[2579]: I0116 20:43:26.255574 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:36 localhost.localdomain kubelet.sh[2579]: I0116 20:43:36.326623 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:36 localhost.localdomain kubelet.sh[2579]: I0116 20:43:36.340540 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:36 localhost.localdomain kubelet.sh[2579]: I0116 20:43:36.340863 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:36 localhost.localdomain kubelet.sh[2579]: I0116 20:43:36.341068 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:38 localhost.localdomain approve-csr.sh[8190]: E0116 20:43:38.126661 8190 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:38 localhost.localdomain approve-csr.sh[8190]: E0116 20:43:38.131038 8190 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:38 localhost.localdomain approve-csr.sh[8190]: E0116 20:43:38.132451 8190 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:38 localhost.localdomain approve-csr.sh[8190]: E0116 20:43:38.134064 8190 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:38 localhost.localdomain approve-csr.sh[8190]: E0116 20:43:38.135332 8190 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:38 localhost.localdomain approve-csr.sh[8190]: The connection to the server localhost:6443 was refused - did you specify the right host or port?
Jan 16 20:43:38 localhost.localdomain kubelet.sh[2579]: I0116 20:43:38.617559 2579 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kni-infra/keepalived-localhost.localdomain" podUID=f3cb0bd9c64889e06acccc1066e67828 containerName="keepalived" probeResult=failure output=<
Jan 16 20:43:38 localhost.localdomain kubelet.sh[2579]: /bin/bash: line 2: kill: `': not a pid or valid job spec
Jan 16 20:43:38 localhost.localdomain kubelet.sh[2579]: >
Jan 16 20:43:41 localhost.localdomain master-bmh-update.sh[8227]: E0116 20:43:41.271380 8227 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:41 localhost.localdomain master-bmh-update.sh[8227]: E0116 20:43:41.272856 8227 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:41 localhost.localdomain master-bmh-update.sh[8227]: E0116 20:43:41.273733 8227 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:41 localhost.localdomain master-bmh-update.sh[8227]: E0116 20:43:41.276110 8227 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:41 localhost.localdomain master-bmh-update.sh[8227]: E0116 20:43:41.276878 8227 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 20:43:41 localhost.localdomain master-bmh-update.sh[8227]: The connection to the server localhost:6443 was refused - did you specify the right host or port?
Jan 16 20:43:41 localhost.localdomain master-bmh-update.sh[6528]: Waiting for BareMetalHosts to appear...
Jan 16 20:43:43 localhost.localdomain kubelet.sh[2579]: I0116 20:43:43.730835 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 20:43:43 localhost.localdomain kubelet.sh[2579]: I0116 20:43:43.731291 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Pending
Jan 16 20:43:43 localhost.localdomain kubelet.sh[2579]: I0116 20:43:43.731678 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Pending
Jan 16 20:43:43 localhost.localdomain kubelet.sh[2579]: I0116 20:43:43.731752 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 20:43:43 localhost.localdomain kubelet.sh[2579]: I0116 20:43:43.731802 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 20:43:43 localhost.localdomain kubelet.sh[2579]: I0116 20:43:43.731877 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 20:43:43 localhost.localdomain kubelet.sh[2579]: I0116 20:43:43.732061 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Pending
Jan 16 20:43:43 localhost.localdomain kubelet.sh[2579]: I0116 20:43:43.732149 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 20:43:43 localhost.localdomain kubelet.sh[2579]: I0116 20:43:43.732202 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 20:43:46 localhost.localdomain kubelet.sh[2579]: I0116 20:43:46.410592 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:46 localhost.localdomain kubelet.sh[2579]: I0116 20:43:46.433230 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:46 localhost.localdomain kubelet.sh[2579]: I0116 20:43:46.434353 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:46 localhost.localdomain kubelet.sh[2579]: I0116 20:43:46.434424 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:47 localhost.localdomain crio[2304]: time="2024-01-16 20:43:47.602673272Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=cea10e8c-cf05-4faf-bfce-d7b0e17753db name=/runtime.v1.ImageService/PullImage
Jan 16 20:43:47 localhost.localdomain crio[2304]: time="2024-01-16 20:43:47.611089875Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=4000aeb5-2d3d-4d65-b488-2be4b0fbe902 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:47 localhost.localdomain crio[2304]: time="2024-01-16 20:43:47.619877124Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=4000aeb5-2d3d-4d65-b488-2be4b0fbe902 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:47 localhost.localdomain crio[2304]: time="2024-01-16 20:43:47.627434305Z" level=info msg="Creating container: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager" id=85033b5d-e7f9-4463-abc3-486b6b2bd656 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:43:47 localhost.localdomain crio[2304]: time="2024-01-16 20:43:47.628684046Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:43:47 localhost.localdomain crio[2304]: time="2024-01-16 20:43:47.663373146Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=b6741b46-fbfc-4850-b972-a81c86e0384d name=/runtime.v1.ImageService/PullImage
Jan 16 20:43:47 localhost.localdomain crio[2304]: time="2024-01-16 20:43:47.667772891Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=a7f72ba4-3c9b-4f92-8a35-a9cf48183777 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:47 localhost.localdomain crio[2304]: time="2024-01-16 20:43:47.674887427Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=a7f72ba4-3c9b-4f92-8a35-a9cf48183777 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:47 localhost.localdomain crio[2304]: time="2024-01-16 20:43:47.679129694Z" level=info msg="Creating container: kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler" id=55edd8b4-399e-4d24-9e79-9cf1e87ac2df name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:43:47 localhost.localdomain crio[2304]: time="2024-01-16 20:43:47.679649118Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:43:47 localhost.localdomain crio[2304]: time="2024-01-16 20:43:47.709571116Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=3031ac25-866f-45d8-a962-f149a2eae08b name=/runtime.v1.ImageService/PullImage
Jan 16 20:43:47 localhost.localdomain crio[2304]: time="2024-01-16 20:43:47.712401182Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=cd0b9f45-0c8a-44bb-ae61-2ca56e4ca066 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:47 localhost.localdomain crio[2304]: time="2024-01-16 20:43:47.732291883Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=cd0b9f45-0c8a-44bb-ae61-2ca56e4ca066 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:47 localhost.localdomain crio[2304]: time="2024-01-16 20:43:47.752391076Z" level=info msg="Creating container: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/setup" id=3676d2dd-c840-4fe5-ab90-15389038903d name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:43:47 localhost.localdomain crio[2304]: time="2024-01-16 20:43:47.753727777Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:43:48 localhost.localdomain systemd[1]: Started crio-conmon-14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc.scope.
Jan 16 20:43:48 localhost.localdomain systemd[1]: Started crio-conmon-c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243.scope.
Jan 16 20:43:48 localhost.localdomain systemd[1]: Started crio-conmon-0a595a7350da388b8c61b7e704112d1c886edf09068e421c56d19d38e17f400f.scope.
Jan 16 20:43:48 localhost.localdomain systemd[1]: Started libcontainer container 14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc.
Jan 16 20:43:48 localhost.localdomain systemd[1]: Started libcontainer container c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243.
Jan 16 20:43:48 localhost.localdomain kubelet.sh[2579]: I0116 20:43:48.648612 2579 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kni-infra/keepalived-localhost.localdomain" podUID=f3cb0bd9c64889e06acccc1066e67828 containerName="keepalived" probeResult=failure output=<
Jan 16 20:43:48 localhost.localdomain kubelet.sh[2579]: /bin/bash: line 2: kill: `': not a pid or valid job spec
Jan 16 20:43:48 localhost.localdomain kubelet.sh[2579]: >
Jan 16 20:43:48 localhost.localdomain systemd[1]: Started libcontainer container 0a595a7350da388b8c61b7e704112d1c886edf09068e421c56d19d38e17f400f.
Jan 16 20:43:48 localhost.localdomain systemd[1]: run-runc-14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc-runc.zZn2gg.mount: Deactivated successfully.
Jan 16 20:43:48 localhost.localdomain crio[2304]: time="2024-01-16 20:43:48.722263972Z" level=info msg="Created container c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/setup" id=3676d2dd-c840-4fe5-ab90-15389038903d name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:43:48 localhost.localdomain crio[2304]: time="2024-01-16 20:43:48.724185593Z" level=info msg="Starting container: c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243" id=07ad6966-be43-4f0b-8d47-11726c248327 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 20:43:48 localhost.localdomain crio[2304]: time="2024-01-16 20:43:48.725188976Z" level=info msg="Created container 14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager" id=85033b5d-e7f9-4463-abc3-486b6b2bd656 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:43:48 localhost.localdomain crio[2304]: time="2024-01-16 20:43:48.725771114Z" level=info msg="Starting container: 14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc" id=69074d40-291b-4561-bced-46fdeb163556 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 20:43:48 localhost.localdomain crio[2304]: time="2024-01-16 20:43:48.774410636Z" level=info msg="Started container" PID=8289 containerID=14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc description=kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager id=69074d40-291b-4561-bced-46fdeb163556 name=/runtime.v1.RuntimeService/StartContainer sandboxID=79c10015fd162b8e62ecb33ebeccbd5e476b9a518fb7eb7c00b519d5bb0eb934
Jan 16 20:43:48 localhost.localdomain crio[2304]: time="2024-01-16 20:43:48.788250787Z" level=info msg="Started container" PID=8296 containerID=c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243 description=openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/setup id=07ad6966-be43-4f0b-8d47-11726c248327 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ad4d9c6ed5d6ab8a2a9b57904014b7c602165d8b2f56808bf0a162e61ca5e05d
Jan 16 20:43:48 localhost.localdomain systemd[1]: crio-c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243.scope: Deactivated successfully.
Jan 16 20:43:48 localhost.localdomain conmon[8251]: conmon c8d5e0778043f084685e : Failed to open cgroups file: /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cb3be1f2df5273e9b77f7050777bcbe.slice/crio-c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243.scope/memory.events
Jan 16 20:43:48 localhost.localdomain crio[2304]: time="2024-01-16 20:43:48.810835380Z" level=info msg="Created container 0a595a7350da388b8c61b7e704112d1c886edf09068e421c56d19d38e17f400f: kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler" id=55edd8b4-399e-4d24-9e79-9cf1e87ac2df name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:43:48 localhost.localdomain crio[2304]: time="2024-01-16 20:43:48.814221313Z" level=info msg="Starting container: 0a595a7350da388b8c61b7e704112d1c886edf09068e421c56d19d38e17f400f" id=915a658f-f4e8-4128-ade9-cdecaec9f4b3 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 20:43:48 localhost.localdomain crio[2304]: time="2024-01-16 20:43:48.817561727Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:384b0d1665ce12136ede1c708c4542d12eac1f788528f8bc77cb52d871057437" id=0844cd5e-34d2-49c6-b63c-5ad72dbf556e name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:48 localhost.localdomain crio[2304]: time="2024-01-16 20:43:48.818622381Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:384b0d1665ce12136ede1c708c4542d12eac1f788528f8bc77cb52d871057437 not found" id=0844cd5e-34d2-49c6-b63c-5ad72dbf556e name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:48 localhost.localdomain crio[2304]: time="2024-01-16 20:43:48.823064179Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:384b0d1665ce12136ede1c708c4542d12eac1f788528f8bc77cb52d871057437" id=c33a50fc-b0db-4e26-9739-30448d884a29 name=/runtime.v1.ImageService/PullImage
Jan 16 20:43:48 localhost.localdomain crio[2304]: time="2024-01-16 20:43:48.840847231Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:384b0d1665ce12136ede1c708c4542d12eac1f788528f8bc77cb52d871057437\""
Jan 16 20:43:48 localhost.localdomain systemd[1]: crio-conmon-c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243.scope: Deactivated successfully.
Jan 16 20:43:48 localhost.localdomain crio[2304]: time="2024-01-16 20:43:48.858540930Z" level=info msg="Started container" PID=8319 containerID=0a595a7350da388b8c61b7e704112d1c886edf09068e421c56d19d38e17f400f description=kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler id=915a658f-f4e8-4128-ade9-cdecaec9f4b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8ef4b7210274a6b52b1f275b2b88575b44667f9376ae93b8eea1a279639e87b6
Jan 16 20:43:49 localhost.localdomain conmon[8123]: conmon 90a34620cf7fa31e2700 : container 8134 exited with status 255
Jan 16 20:43:49 localhost.localdomain systemd[1]: crio-90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d.scope: Deactivated successfully.
Jan 16 20:43:49 localhost.localdomain systemd[1]: crio-conmon-90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d.scope: Deactivated successfully.
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.639124 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" event=&{ID:c3db590e56a311b869092b2d6b1724e5 Type:ContainerStarted Data:14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc}
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.644202 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" event=&{ID:b8b0f2012ce2b145220be181d7a5aa55 Type:ContainerStarted Data:0a595a7350da388b8c61b7e704112d1c886edf09068e421c56d19d38e17f400f}
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.645035 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.648774 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.648881 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.649003 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.652146 2579 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_bootstrap-cluster-version-operator-localhost.localdomain_05c96ce8daffad47cf2b15e2a67753ec/cluster-version-operator/1.log"
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.654213 2579 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_bootstrap-cluster-version-operator-localhost.localdomain_05c96ce8daffad47cf2b15e2a67753ec/cluster-version-operator/0.log"
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.654392 2579 generic.go:334] "Generic (PLEG): container finished" podID=05c96ce8daffad47cf2b15e2a67753ec containerID="90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d" exitCode=255
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.654544 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" event=&{ID:05c96ce8daffad47cf2b15e2a67753ec Type:ContainerDied Data:90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d}
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.654892 2579 scope.go:115] "RemoveContainer" containerID="c3ec8cfb2e6a164cb42f2f498f8588fcd9acb5c4332052283a291e5cfc99bc65"
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.655543 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.660711 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.660899 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.661071 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.661393 2579 scope.go:115] "RemoveContainer" containerID="90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d"
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: E0116 20:43:49.662292 2579 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-version-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-version-operator pod=bootstrap-cluster-version-operator-localhost.localdomain_openshift-cluster-version(05c96ce8daffad47cf2b15e2a67753ec)\"" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" podUID=05c96ce8daffad47cf2b15e2a67753ec
Jan 16 20:43:49 localhost.localdomain crio[2304]: time="2024-01-16 20:43:49.662922817Z" level=info msg="Removing container: c3ec8cfb2e6a164cb42f2f498f8588fcd9acb5c4332052283a291e5cfc99bc65" id=992d1026-f6ef-4284-b221-00b6eb65de45 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 20:43:49 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-51546eaea5859685c53fd1fbb346bdda650bfd26a11f3fb311433180bcaaaff0-merged.mount: Deactivated successfully.
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.678205 2579 generic.go:334] "Generic (PLEG): container finished" podID=1cb3be1f2df5273e9b77f7050777bcbe containerID="c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243" exitCode=0
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.678312 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" event=&{ID:1cb3be1f2df5273e9b77f7050777bcbe Type:ContainerDied Data:c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243}
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.678764 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.681832 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.682002 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.682034 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:49 localhost.localdomain crio[2304]: time="2024-01-16 20:43:49.683601172Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=eae76d4e-6a53-414e-947c-ce3d38bcbbf9 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:49 localhost.localdomain crio[2304]: time="2024-01-16 20:43:49.693126827Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=eae76d4e-6a53-414e-947c-ce3d38bcbbf9 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.694146 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.697353 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.697397 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:49 localhost.localdomain kubelet.sh[2579]: I0116 20:43:49.697424 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:49 localhost.localdomain crio[2304]: time="2024-01-16 20:43:49.698268142Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=cf2e321b-3d84-43a1-8824-f73d46797232 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:49 localhost.localdomain crio[2304]: time="2024-01-16 20:43:49.704019201Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=cf2e321b-3d84-43a1-8824-f73d46797232 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:49 localhost.localdomain crio[2304]: time="2024-01-16 20:43:49.713698711Z" level=info msg="Creating container: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver" id=909b3834-96a4-4b11-83ec-d1d283801435 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:43:49 localhost.localdomain crio[2304]: time="2024-01-16 20:43:49.714243023Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:43:49 localhost.localdomain crio[2304]: time="2024-01-16 20:43:49.771150070Z" level=info msg="Removed container c3ec8cfb2e6a164cb42f2f498f8588fcd9acb5c4332052283a291e5cfc99bc65: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator" id=992d1026-f6ef-4284-b221-00b6eb65de45 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 20:43:49 localhost.localdomain systemd[1]: Started crio-conmon-ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5.scope.
Jan 16 20:43:50 localhost.localdomain systemd[1]: run-runc-ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5-runc.gwhIVS.mount: Deactivated successfully.
Jan 16 20:43:50 localhost.localdomain systemd[1]: Started libcontainer container ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5.
Jan 16 20:43:50 localhost.localdomain crio[2304]: time="2024-01-16 20:43:50.167782469Z" level=info msg="Created container ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver" id=909b3834-96a4-4b11-83ec-d1d283801435 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:43:50 localhost.localdomain crio[2304]: time="2024-01-16 20:43:50.170595322Z" level=info msg="Starting container: ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5" id=0049d23e-3758-48b0-a24e-4108fe69afd1 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 20:43:50 localhost.localdomain crio[2304]: time="2024-01-16 20:43:50.198276443Z" level=info msg="Started container" PID=8437 containerID=ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5 description=openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver id=0049d23e-3758-48b0-a24e-4108fe69afd1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ad4d9c6ed5d6ab8a2a9b57904014b7c602165d8b2f56808bf0a162e61ca5e05d
Jan 16 20:43:50 localhost.localdomain crio[2304]: time="2024-01-16 20:43:50.226547050Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c074b99f606a6eba6b937f3d96115ec5790b747f6c0b6f6eed01e4f1a3a189eb" id=55d51814-f5eb-495f-9b72-f017f3b95744 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:50 localhost.localdomain crio[2304]: time="2024-01-16 20:43:50.227095973Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba904bf53d6c9cd58209eebeead820a9fc257a3eef7e2301313cd33072c494dd,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c074b99f606a6eba6b937f3d96115ec5790b747f6c0b6f6eed01e4f1a3a189eb],Size_:546075839,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=55d51814-f5eb-495f-9b72-f017f3b95744 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:50 localhost.localdomain crio[2304]: time="2024-01-16 20:43:50.228043779Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c074b99f606a6eba6b937f3d96115ec5790b747f6c0b6f6eed01e4f1a3a189eb" id=2bd3b883-c85f-4014-8e73-714c9548b251 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:50 localhost.localdomain crio[2304]: time="2024-01-16 20:43:50.228429639Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba904bf53d6c9cd58209eebeead820a9fc257a3eef7e2301313cd33072c494dd,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c074b99f606a6eba6b937f3d96115ec5790b747f6c0b6f6eed01e4f1a3a189eb],Size_:546075839,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=2bd3b883-c85f-4014-8e73-714c9548b251 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:50 localhost.localdomain crio[2304]: time="2024-01-16 20:43:50.230342470Z" level=info msg="Creating container: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver-insecure-readyz" id=5579db7f-370c-4326-8eed-25cbe51cf9c5 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:43:50 localhost.localdomain crio[2304]: time="2024-01-16 20:43:50.230654875Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:43:50 localhost.localdomain systemd[1]: Started crio-conmon-3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023.scope.
Jan 16 20:43:50 localhost.localdomain systemd[1]: Started libcontainer container 3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023.
Jan 16 20:43:50 localhost.localdomain kubelet.sh[2579]: I0116 20:43:50.685644 2579 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_bootstrap-cluster-version-operator-localhost.localdomain_05c96ce8daffad47cf2b15e2a67753ec/cluster-version-operator/1.log"
Jan 16 20:43:50 localhost.localdomain kubelet.sh[2579]: I0116 20:43:50.690613 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:50 localhost.localdomain kubelet.sh[2579]: I0116 20:43:50.691200 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" event=&{ID:1cb3be1f2df5273e9b77f7050777bcbe Type:ContainerStarted Data:ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5}
Jan 16 20:43:50 localhost.localdomain kubelet.sh[2579]: I0116 20:43:50.692875 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:50 localhost.localdomain kubelet.sh[2579]: I0116 20:43:50.693098 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:50 localhost.localdomain kubelet.sh[2579]: I0116 20:43:50.693134 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:50 localhost.localdomain crio[2304]: time="2024-01-16 20:43:50.728838876Z" level=info msg="Created container 3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver-insecure-readyz" id=5579db7f-370c-4326-8eed-25cbe51cf9c5 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:43:50 localhost.localdomain crio[2304]: time="2024-01-16 20:43:50.730222633Z" level=info msg="Starting container: 3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023" id=f81bb925-6997-4053-847d-c88fc04e5ff4 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 20:43:50 localhost.localdomain systemd[1]: run-runc-3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023-runc.vXMW90.mount: Deactivated successfully.
Jan 16 20:43:50 localhost.localdomain crio[2304]: time="2024-01-16 20:43:50.781843157Z" level=info msg="Started container" PID=8481 containerID=3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023 description=openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver-insecure-readyz id=f81bb925-6997-4053-847d-c88fc04e5ff4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ad4d9c6ed5d6ab8a2a9b57904014b7c602165d8b2f56808bf0a162e61ca5e05d
Jan 16 20:43:50 localhost.localdomain crio[2304]: time="2024-01-16 20:43:50.919883523Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:384b0d1665ce12136ede1c708c4542d12eac1f788528f8bc77cb52d871057437\""
Jan 16 20:43:51 localhost.localdomain kubelet.sh[2579]: I0116 20:43:51.712827 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:51 localhost.localdomain kubelet.sh[2579]: I0116 20:43:51.724624 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" event=&{ID:1cb3be1f2df5273e9b77f7050777bcbe Type:ContainerStarted Data:3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023}
Jan 16 20:43:51 localhost.localdomain kubelet.sh[2579]: I0116 20:43:51.728756 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:51 localhost.localdomain kubelet.sh[2579]: I0116 20:43:51.729388 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:51 localhost.localdomain kubelet.sh[2579]: I0116 20:43:51.729556 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:52 localhost.localdomain kubelet.sh[2579]: I0116 20:43:52.734653 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:43:52 localhost.localdomain kubelet.sh[2579]: I0116 20:43:52.737790 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:52 localhost.localdomain kubelet.sh[2579]: I0116 20:43:52.743197 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:52 localhost.localdomain kubelet.sh[2579]: I0116 20:43:52.743396 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:52 localhost.localdomain kubelet.sh[2579]: I0116 20:43:52.757860 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:53 localhost.localdomain kubelet.sh[2579]: I0116 20:43:53.787136 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:53 localhost.localdomain kubelet.sh[2579]: I0116 20:43:53.814573 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:53 localhost.localdomain kubelet.sh[2579]: I0116 20:43:53.814907 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:53 localhost.localdomain kubelet.sh[2579]: I0116 20:43:53.815168 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:56 localhost.localdomain kubelet.sh[2579]: I0116 20:43:56.615219 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:56 localhost.localdomain kubelet.sh[2579]: I0116 20:43:56.617857 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:56 localhost.localdomain kubelet.sh[2579]: I0116 20:43:56.618034 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:56 localhost.localdomain kubelet.sh[2579]: I0116 20:43:56.618064 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:57 localhost.localdomain bootkube.sh[7556]: API is up
Jan 16 20:43:57 localhost.localdomain bootkube.sh[7556]: Created "0000_00_cluster-version-operator_00_namespace.yaml" namespaces.v1./openshift-cluster-version -n
Jan 16 20:43:58 localhost.localdomain bootkube.sh[7556]: Failed to create "0000_00_cluster-version-operator_01_adminack_configmap.yaml" configmaps.v1./admin-acks -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:43:58 localhost.localdomain bootkube.sh[7556]: Failed to create "0000_00_cluster-version-operator_01_admingate_configmap.yaml" configmaps.v1./admin-gates -n openshift-config-managed: namespaces "openshift-config-managed" not found
Jan 16 20:43:58 localhost.localdomain bootkube.sh[7556]: Created "0000_00_cluster-version-operator_01_clusteroperator.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/clusteroperators.config.openshift.io -n
Jan 16 20:43:58 localhost.localdomain bootkube.sh[7556]: Created "0000_00_cluster-version-operator_01_clusterversion.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/clusterversions.config.openshift.io -n
Jan 16 20:43:58 localhost.localdomain bootkube.sh[7556]: Created "0000_00_cluster-version-operator_02_roles.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/cluster-version-operator -n
Jan 16 20:43:58 localhost.localdomain kubelet.sh[2579]: I0116 20:43:58.465744 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:58 localhost.localdomain kubelet.sh[2579]: I0116 20:43:58.469409 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:58 localhost.localdomain kubelet.sh[2579]: I0116 20:43:58.469598 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:58 localhost.localdomain kubelet.sh[2579]: I0116 20:43:58.471017 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:58 localhost.localdomain kubelet.sh[2579]: I0116 20:43:58.567317 2579 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kni-infra/keepalived-localhost.localdomain" podUID=f3cb0bd9c64889e06acccc1066e67828 containerName="keepalived" probeResult=failure output=<
Jan 16 20:43:58 localhost.localdomain kubelet.sh[2579]: /bin/bash: line 2: kill: `': not a pid or valid job spec
Jan 16 20:43:58 localhost.localdomain kubelet.sh[2579]: >
Jan 16 20:43:58 localhost.localdomain kubelet.sh[2579]: I0116 20:43:58.567553 2579 kubelet.go:2529] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kni-infra/keepalived-localhost.localdomain"
Jan 16 20:43:58 localhost.localdomain kubelet.sh[2579]: I0116 20:43:58.568138 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:43:58 localhost.localdomain kubelet.sh[2579]: I0116 20:43:58.572377 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:43:58 localhost.localdomain kubelet.sh[2579]: I0116 20:43:58.572545 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:43:58 localhost.localdomain kubelet.sh[2579]: I0116 20:43:58.572585 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:43:58 localhost.localdomain kubelet.sh[2579]: I0116 20:43:58.575171 2579 kuberuntime_manager.go:991] "Message for Container of pod" containerName="keepalived" containerStatusID={Type:cri-o ID:4d120e2e2e8ca56246d26010aef81ab977c27638ea1c395a3bacc9d53efe0757} pod="openshift-kni-infra/keepalived-localhost.localdomain" containerMessage="Container keepalived failed liveness probe, will be restarted"
Jan 16 20:43:58 localhost.localdomain kubelet.sh[2579]: I0116 20:43:58.576859 2579 kuberuntime_container.go:742] "Killing container with a grace period" pod="openshift-kni-infra/keepalived-localhost.localdomain" podUID=f3cb0bd9c64889e06acccc1066e67828 containerName="keepalived" containerID="cri-o://4d120e2e2e8ca56246d26010aef81ab977c27638ea1c395a3bacc9d53efe0757" gracePeriod=65
Jan 16 20:43:58 localhost.localdomain crio[2304]: time="2024-01-16 20:43:58.579712010Z" level=info msg="Stopping container: 4d120e2e2e8ca56246d26010aef81ab977c27638ea1c395a3bacc9d53efe0757 (timeout: 65s)" id=06b35acb-0d7b-4b3e-a4ef-ddc8edd40e41 name=/runtime.v1.RuntimeService/StopContainer
Jan 16 20:43:58 localhost.localdomain systemd[1]: crio-4d120e2e2e8ca56246d26010aef81ab977c27638ea1c395a3bacc9d53efe0757.scope: Deactivated successfully.
Jan 16 20:43:58 localhost.localdomain conmon[8040]: conmon 4d120e2e2e8ca56246d2 : container 8051 exited with status 143
Jan 16 20:43:58 localhost.localdomain systemd[1]: crio-conmon-4d120e2e2e8ca56246d26010aef81ab977c27638ea1c395a3bacc9d53efe0757.scope: Deactivated successfully.
Jan 16 20:43:59 localhost.localdomain approve-csr.sh[8532]: No resources found
Jan 16 20:43:59 localhost.localdomain kubelet.sh[2579]: I0116 20:43:59.833614 2579 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kni-infra_keepalived-localhost.localdomain_f3cb0bd9c64889e06acccc1066e67828/keepalived/1.log"
Jan 16 20:43:59 localhost.localdomain kubelet.sh[2579]: I0116 20:43:59.842126 2579 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kni-infra_keepalived-localhost.localdomain_f3cb0bd9c64889e06acccc1066e67828/keepalived/0.log"
Jan 16 20:43:59 localhost.localdomain kubelet.sh[2579]: I0116 20:43:59.842566 2579 generic.go:334] "Generic (PLEG): container finished" podID=f3cb0bd9c64889e06acccc1066e67828 containerID="4d120e2e2e8ca56246d26010aef81ab977c27638ea1c395a3bacc9d53efe0757" exitCode=143
Jan 16 20:43:59 localhost.localdomain kubelet.sh[2579]: I0116 20:43:59.842676 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/keepalived-localhost.localdomain" event=&{ID:f3cb0bd9c64889e06acccc1066e67828 Type:ContainerDied Data:4d120e2e2e8ca56246d26010aef81ab977c27638ea1c395a3bacc9d53efe0757}
Jan 16 20:43:59 localhost.localdomain kubelet.sh[2579]: I0116 20:43:59.845065 2579 scope.go:115] "RemoveContainer" containerID="d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8"
Jan 16 20:43:59 localhost.localdomain crio[2304]: time="2024-01-16 20:43:59.859572347Z" level=info msg="Removing container: d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8" id=844c9897-0d45-436e-b51d-5410860c4886 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 20:43:59 localhost.localdomain systemd[1]: var-lib-containers-storage-overlay-fb12753973ae7f1a0d3603877e49b0e8fb7776fc92f9c707cf76a0179543ef04-merged.mount: Deactivated successfully.
Jan 16 20:43:59 localhost.localdomain crio[2304]: time="2024-01-16 20:43:59.935413083Z" level=info msg="Stopped container 4d120e2e2e8ca56246d26010aef81ab977c27638ea1c395a3bacc9d53efe0757: openshift-kni-infra/keepalived-localhost.localdomain/keepalived" id=06b35acb-0d7b-4b3e-a4ef-ddc8edd40e41 name=/runtime.v1.RuntimeService/StopContainer
Jan 16 20:43:59 localhost.localdomain crio[2304]: time="2024-01-16 20:43:59.938329769Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:594f2e4ef75bf8bfd342670ddd1d50bd97888671f13b8b566af8c568285de689" id=3fb2525a-ced4-4c23-8716-3f56ecb76940 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:59 localhost.localdomain crio[2304]: time="2024-01-16 20:43:59.938902095Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:5cb5dd5856f0cbd66a3227a48d327384ad2ba615d2e9f2313428232427b8aeb7,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:594f2e4ef75bf8bfd342670ddd1d50bd97888671f13b8b566af8c568285de689],Size_:537687465,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=3fb2525a-ced4-4c23-8716-3f56ecb76940 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:59 localhost.localdomain crio[2304]: time="2024-01-16 20:43:59.941055600Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:594f2e4ef75bf8bfd342670ddd1d50bd97888671f13b8b566af8c568285de689" id=fe1a2139-5f08-468e-88e9-dd3cb4621fe8 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:59 localhost.localdomain crio[2304]: time="2024-01-16 20:43:59.941532206Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:5cb5dd5856f0cbd66a3227a48d327384ad2ba615d2e9f2313428232427b8aeb7,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:594f2e4ef75bf8bfd342670ddd1d50bd97888671f13b8b566af8c568285de689],Size_:537687465,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=fe1a2139-5f08-468e-88e9-dd3cb4621fe8 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:43:59 localhost.localdomain crio[2304]: time="2024-01-16 20:43:59.950156001Z" level=info msg="Creating container: openshift-kni-infra/keepalived-localhost.localdomain/keepalived" id=f1d692f8-b859-4cb5-bec9-9b030fb59cb2 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:43:59 localhost.localdomain crio[2304]: time="2024-01-16 20:43:59.950616178Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:44:00 localhost.localdomain systemd[1]: Started crio-conmon-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d.scope.
Jan 16 20:44:00 localhost.localdomain crio[2304]: time="2024-01-16 20:44:00.074655124Z" level=info msg="Removed container d13d10ac5144cf00fa2285e8d563c64157c650aab6cb5bb4e9d90ec7528edcc8: openshift-kni-infra/keepalived-localhost.localdomain/keepalived" id=844c9897-0d45-436e-b51d-5410860c4886 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 20:44:00 localhost.localdomain systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.mqEjFZ.mount: Deactivated successfully.
Jan 16 20:44:00 localhost.localdomain crio[2304]: time="2024-01-16 20:44:00.116730161Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:384b0d1665ce12136ede1c708c4542d12eac1f788528f8bc77cb52d871057437" id=c33a50fc-b0db-4e26-9739-30448d884a29 name=/runtime.v1.ImageService/PullImage
Jan 16 20:44:00 localhost.localdomain crio[2304]: time="2024-01-16 20:44:00.120880299Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:384b0d1665ce12136ede1c708c4542d12eac1f788528f8bc77cb52d871057437" id=d9c9efc1-0410-4d1a-aa84-c706799c2cff name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:44:00 localhost.localdomain systemd[1]: Started libcontainer container c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d.
Jan 16 20:44:00 localhost.localdomain crio[2304]: time="2024-01-16 20:44:00.138266750Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c6ce09d75120c7c75b95c587ffc4a7a3f18cc099961eab2583e449102365e5b0,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:384b0d1665ce12136ede1c708c4542d12eac1f788528f8bc77cb52d871057437],Size_:535546139,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=d9c9efc1-0410-4d1a-aa84-c706799c2cff name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:44:00 localhost.localdomain crio[2304]: time="2024-01-16 20:44:00.143885728Z" level=info msg="Creating container: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/cluster-policy-controller" id=4c96d648-7a39-48df-9faa-50827d2dc997 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:44:00 localhost.localdomain crio[2304]: time="2024-01-16 20:44:00.144196166Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:44:00 localhost.localdomain crio[2304]: time="2024-01-16 20:44:00.291062439Z" level=info msg="Created container c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d: openshift-kni-infra/keepalived-localhost.localdomain/keepalived" id=f1d692f8-b859-4cb5-bec9-9b030fb59cb2 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:44:00 localhost.localdomain crio[2304]: time="2024-01-16 20:44:00.293120843Z" level=info msg="Starting container: c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d" id=9eb67223-f778-4d7b-be8f-865f50e2d2e0 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 20:44:00 localhost.localdomain crio[2304]: time="2024-01-16 20:44:00.334320449Z" level=info msg="Started container" PID=8616 containerID=c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d description=openshift-kni-infra/keepalived-localhost.localdomain/keepalived id=9eb67223-f778-4d7b-be8f-865f50e2d2e0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ebdc370e2c6148b8fcf32f4fc2cc95081bf61cd8d6252b3c4013c6ed54602ca
Jan 16 20:44:00 localhost.localdomain systemd[1]: Started crio-conmon-d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3.scope.
Jan 16 20:44:00 localhost.localdomain systemd[1]: Started libcontainer container d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3.
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: I0116 20:44:00.631657 2579 patch_prober.go:28] interesting pod/bootstrap-kube-apiserver-localhost.localdomain container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]log ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]etcd ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]etcd-readiness ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]api-openshift-apiserver-available ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]api-openshift-oauth-apiserver-available ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]informer-sync ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/openshift.io-api-request-count-filter ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-kube-apiserver-admission-initializer ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/priority-and-fairness-filter ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-apiextensions-informers ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-apiextensions-controllers ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/crd-informer-synced ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-system-namespaces-controller ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/bootstrap-controller ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-legacy-token-tracking-controller ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/apiservice-registration-controller ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/apiservice-status-available-controller ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]autoregister-completion ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/apiservice-openapi-controller ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/apiservice-discovery-controller ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: [+]shutdown ok
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: readyz check failed
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: I0116 20:44:00.631821 2579 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" podUID=1cb3be1f2df5273e9b77f7050777bcbe containerName="kube-apiserver" probeResult=failure output="HTTP probe failed with statuscode: 500"
Jan 16 20:44:00 localhost.localdomain crio[2304]: time="2024-01-16 20:44:00.689171759Z" level=info msg="Created container d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/cluster-policy-controller" id=4c96d648-7a39-48df-9faa-50827d2dc997 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:44:00 localhost.localdomain crio[2304]: time="2024-01-16 20:44:00.690719833Z" level=info msg="Starting container: d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3" id=bf826fe4-6180-4e1f-9869-65b6ff8194b8 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 20:44:00 localhost.localdomain crio[2304]: time="2024-01-16 20:44:00.717075739Z" level=info msg="Started container" PID=8654 containerID=d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3 description=kube-system/bootstrap-kube-controller-manager-localhost.localdomain/cluster-policy-controller id=bf826fe4-6180-4e1f-9869-65b6ff8194b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=79c10015fd162b8e62ecb33ebeccbd5e476b9a518fb7eb7c00b519d5bb0eb934
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: I0116 20:44:00.852574 2579 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kni-infra_keepalived-localhost.localdomain_f3cb0bd9c64889e06acccc1066e67828/keepalived/1.log"
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: I0116 20:44:00.854601 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/keepalived-localhost.localdomain" event=&{ID:f3cb0bd9c64889e06acccc1066e67828 Type:ContainerStarted Data:c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d}
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: I0116 20:44:00.855097 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: I0116 20:44:00.858282 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: I0116 20:44:00.858394 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: I0116 20:44:00.858475 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: I0116 20:44:00.866809 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" event=&{ID:c3db590e56a311b869092b2d6b1724e5 Type:ContainerStarted Data:d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3}
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: I0116 20:44:00.867242 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: I0116 20:44:00.869538 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: I0116 20:44:00.869628 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:00 localhost.localdomain kubelet.sh[2579]: I0116 20:44:00.869654 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:01 localhost.localdomain kubelet.sh[2579]: I0116 20:44:01.466222 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:01 localhost.localdomain kubelet.sh[2579]: I0116 20:44:01.473613 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:01 localhost.localdomain kubelet.sh[2579]: I0116 20:44:01.473891 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:01 localhost.localdomain kubelet.sh[2579]: I0116 20:44:01.474119 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:01 localhost.localdomain kubelet.sh[2579]: I0116 20:44:01.877028 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:01 localhost.localdomain kubelet.sh[2579]: I0116 20:44:01.877330 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:01 localhost.localdomain kubelet.sh[2579]: I0116 20:44:01.881526 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:01 localhost.localdomain kubelet.sh[2579]: I0116 20:44:01.881627 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:01 localhost.localdomain kubelet.sh[2579]: I0116 20:44:01.881657 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:01 localhost.localdomain kubelet.sh[2579]: I0116 20:44:01.889139 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:01 localhost.localdomain kubelet.sh[2579]: I0116 20:44:01.889231 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:01 localhost.localdomain kubelet.sh[2579]: I0116 20:44:01.889259 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:01 localhost.localdomain master-bmh-update.sh[8687]: error: the server doesn't have a resource type "baremetalhosts"
Jan 16 20:44:01 localhost.localdomain master-bmh-update.sh[6528]: Waiting for BareMetalHosts to appear...
Jan 16 20:44:02 localhost.localdomain kubelet.sh[2579]: I0116 20:44:02.466719 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:02 localhost.localdomain kubelet.sh[2579]: I0116 20:44:02.472123 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:02 localhost.localdomain kubelet.sh[2579]: I0116 20:44:02.472294 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:02 localhost.localdomain kubelet.sh[2579]: I0116 20:44:02.472335 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:02 localhost.localdomain kubelet.sh[2579]: I0116 20:44:02.472686 2579 scope.go:115] "RemoveContainer" containerID="90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d"
Jan 16 20:44:02 localhost.localdomain crio[2304]: time="2024-01-16 20:44:02.474766367Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=2d96d36f-3ff3-440e-a348-4e178bfc0148 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:44:02 localhost.localdomain crio[2304]: time="2024-01-16 20:44:02.475219391Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:40e15091a793905eb63a02d951105fc5c5904bfb294f8004c052ac950c9ac44a,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa],Size_:522846560,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=2d96d36f-3ff3-440e-a348-4e178bfc0148 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:44:02 localhost.localdomain crio[2304]: time="2024-01-16 20:44:02.477285331Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=bd6533e6-bcb2-437f-80c0-4b528736fab7 name=/runtime.v1.ImageService/PullImage
Jan 16 20:44:02 localhost.localdomain crio[2304]: time="2024-01-16 20:44:02.482552342Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa\""
Jan 16 20:44:04 localhost.localdomain crio[2304]: time="2024-01-16 20:44:04.624548141Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=bd6533e6-bcb2-437f-80c0-4b528736fab7 name=/runtime.v1.ImageService/PullImage
Jan 16 20:44:04 localhost.localdomain crio[2304]: time="2024-01-16 20:44:04.628556150Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=c3275e51-88f9-456f-964c-a2ce41601a2a name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:44:04 localhost.localdomain crio[2304]: time="2024-01-16 20:44:04.629330829Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:40e15091a793905eb63a02d951105fc5c5904bfb294f8004c052ac950c9ac44a,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa],Size_:522846560,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=c3275e51-88f9-456f-964c-a2ce41601a2a name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:44:04 localhost.localdomain crio[2304]: time="2024-01-16 20:44:04.633354392Z" level=info msg="Creating container: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator" id=9d9d88de-91c2-46e4-bc20-f6ddff9a3d52 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:44:04 localhost.localdomain crio[2304]: time="2024-01-16 20:44:04.634154332Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:44:05 localhost.localdomain systemd[1]: Started crio-conmon-f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981.scope.
Jan 16 20:44:05 localhost.localdomain systemd[1]: Started libcontainer container f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981.
Jan 16 20:44:05 localhost.localdomain crio[2304]: time="2024-01-16 20:44:05.493903238Z" level=info msg="Created container f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator" id=9d9d88de-91c2-46e4-bc20-f6ddff9a3d52 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:44:05 localhost.localdomain crio[2304]: time="2024-01-16 20:44:05.496542292Z" level=info msg="Starting container: f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981" id=870777c8-2fb4-42a8-a6fa-d7a2cb43e6c7 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 20:44:05 localhost.localdomain crio[2304]: time="2024-01-16 20:44:05.574758615Z" level=info msg="Started container" PID=8716 containerID=f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981 description=openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator id=870777c8-2fb4-42a8-a6fa-d7a2cb43e6c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=70686be8a2d87683a00828f4233d059638689db262cbef7d341c1f46aeb3fd09
Jan 16 20:44:05 localhost.localdomain kubelet.sh[2579]: I0116 20:44:05.922700 2579 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_bootstrap-cluster-version-operator-localhost.localdomain_05c96ce8daffad47cf2b15e2a67753ec/cluster-version-operator/1.log"
Jan 16 20:44:05 localhost.localdomain kubelet.sh[2579]: I0116 20:44:05.923162 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" event=&{ID:05c96ce8daffad47cf2b15e2a67753ec Type:ContainerStarted Data:f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981}
Jan 16 20:44:05 localhost.localdomain kubelet.sh[2579]: I0116 20:44:05.924293 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:05 localhost.localdomain kubelet.sh[2579]: I0116 20:44:05.932869 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:05 localhost.localdomain kubelet.sh[2579]: I0116 20:44:05.933728 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:05 localhost.localdomain kubelet.sh[2579]: I0116 20:44:05.934282 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:06 localhost.localdomain kubelet.sh[2579]: I0116 20:44:06.675890 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:06 localhost.localdomain kubelet.sh[2579]: I0116 20:44:06.681045 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:06 localhost.localdomain kubelet.sh[2579]: I0116 20:44:06.681401 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:06 localhost.localdomain kubelet.sh[2579]: I0116 20:44:06.681545 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:07 localhost.localdomain systemd[1]: crio-27f0a69b5a1170662ecdcb22b60df84ee82dc8b43f39e64495dfc15c1553e58b.scope: Deactivated successfully.
Jan 16 20:44:07 localhost.localdomain systemd[1]: crio-27f0a69b5a1170662ecdcb22b60df84ee82dc8b43f39e64495dfc15c1553e58b.scope: Consumed 1.888s CPU time.
Jan 16 20:44:07 localhost.localdomain conmon[7580]: conmon 27f0a69b5a1170662ecd : container 7603 exited with status 1
Jan 16 20:44:07 localhost.localdomain conmon[7580]: conmon 27f0a69b5a1170662ecd : Failed to open cgroups file: /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3cb0bd9c64889e06acccc1066e67828.slice/crio-27f0a69b5a1170662ecdcb22b60df84ee82dc8b43f39e64495dfc15c1553e58b.scope/memory.events
Jan 16 20:44:07 localhost.localdomain systemd[1]: crio-conmon-27f0a69b5a1170662ecdcb22b60df84ee82dc8b43f39e64495dfc15c1553e58b.scope: Deactivated successfully.
Jan 16 20:44:08 localhost.localdomain conmon[8249]: conmon 14037eeba10a1b747479 : container 8289 exited with status 1
Jan 16 20:44:08 localhost.localdomain systemd[1]: crio-14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc.scope: Deactivated successfully.
Jan 16 20:44:08 localhost.localdomain systemd[1]: crio-14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc.scope: Consumed 2.573s CPU time.
Jan 16 20:44:08 localhost.localdomain systemd[1]: crio-conmon-14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc.scope: Deactivated successfully.
Jan 16 20:44:08 localhost.localdomain bootkube.sh[7556]: Failed to create "0000_00_cluster-version-operator_03_deployment.yaml" deployments.v1.apps/cluster-version-operator -n openshift-cluster-version: deployments.apps "cluster-version-operator" is forbidden: quota.openshift.io/ClusterResourceQuota: caches not synchronized
Jan 16 20:44:08 localhost.localdomain bootkube.sh[7556]: Created "0000_00_namespace-openshift-infra.yaml" namespaces.v1./openshift-infra -n
Jan 16 20:44:08 localhost.localdomain bootkube.sh[7556]: Created "0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/rolebindingrestrictions.authorization.openshift.io -n
Jan 16 20:44:08 localhost.localdomain bootkube.sh[7556]: Created "0000_03_config-operator_01_proxy.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/proxies.config.openshift.io -n
Jan 16 20:44:08 localhost.localdomain bootkube.sh[7556]: Created "0000_03_quota-openshift_01_clusterresourcequota.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/clusterresourcequotas.quota.openshift.io -n
Jan 16 20:44:08 localhost.localdomain bootkube.sh[7556]: Created "0000_03_security-openshift_01_scc.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/securitycontextconstraints.security.openshift.io -n
Jan 16 20:44:08 localhost.localdomain kubelet.sh[2579]: I0116 20:44:08.986592 2579 generic.go:334] "Generic (PLEG): container finished" podID=c3db590e56a311b869092b2d6b1724e5 containerID="14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc" exitCode=1
Jan 16 20:44:08 localhost.localdomain kubelet.sh[2579]: I0116 20:44:08.987070 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" event=&{ID:c3db590e56a311b869092b2d6b1724e5 Type:ContainerDied Data:14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc}
Jan 16 20:44:08 localhost.localdomain kubelet.sh[2579]: I0116 20:44:08.988648 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:08 localhost.localdomain kubelet.sh[2579]: I0116 20:44:08.993228 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:08 localhost.localdomain kubelet.sh[2579]: I0116 20:44:08.993520 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:08 localhost.localdomain kubelet.sh[2579]: I0116 20:44:08.993574 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:08 localhost.localdomain kubelet.sh[2579]: I0116 20:44:08.993778 2579 scope.go:115] "RemoveContainer" containerID="14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc"
Jan 16 20:44:08 localhost.localdomain crio[2304]: time="2024-01-16 20:44:08.998356539Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=d1d5724e-7803-4590-914c-eeb70b0349d1 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:44:08 localhost.localdomain crio[2304]: time="2024-01-16 20:44:08.999199316Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=d1d5724e-7803-4590-914c-eeb70b0349d1 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:44:09 localhost.localdomain crio[2304]: time="2024-01-16 20:44:09.001371731Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=a76c40ef-8f9e-4f8e-9a83-56f011693713 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:44:09 localhost.localdomain crio[2304]: time="2024-01-16 20:44:09.001847421Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=a76c40ef-8f9e-4f8e-9a83-56f011693713 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:44:09 localhost.localdomain kubelet.sh[2579]: I0116 20:44:09.003535 2579 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kni-infra_keepalived-localhost.localdomain_f3cb0bd9c64889e06acccc1066e67828/keepalived/1.log"
Jan 16 20:44:09 localhost.localdomain kubelet.sh[2579]: I0116 20:44:09.005249 2579 generic.go:334] "Generic (PLEG): container finished" podID=f3cb0bd9c64889e06acccc1066e67828 containerID="27f0a69b5a1170662ecdcb22b60df84ee82dc8b43f39e64495dfc15c1553e58b" exitCode=1
Jan 16 20:44:09 localhost.localdomain kubelet.sh[2579]: I0116 20:44:09.005895 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/keepalived-localhost.localdomain" event=&{ID:f3cb0bd9c64889e06acccc1066e67828 Type:ContainerDied Data:27f0a69b5a1170662ecdcb22b60df84ee82dc8b43f39e64495dfc15c1553e58b}
Jan 16 20:44:09 localhost.localdomain kubelet.sh[2579]: I0116 20:44:09.007555 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:09 localhost.localdomain crio[2304]: time="2024-01-16 20:44:09.007604812Z" level=info msg="Creating container: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager" id=c7ece4ee-9f54-4167-86b3-0f5fcf871b59 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:44:09 localhost.localdomain crio[2304]: time="2024-01-16 20:44:09.008504516Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:44:09 localhost.localdomain kubelet.sh[2579]: I0116 20:44:09.016240 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:09 localhost.localdomain kubelet.sh[2579]: I0116 20:44:09.016557 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:09 localhost.localdomain kubelet.sh[2579]: I0116 20:44:09.016607 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:09 localhost.localdomain kubelet.sh[2579]: I0116 20:44:09.017731 2579 scope.go:115] "RemoveContainer" containerID="27f0a69b5a1170662ecdcb22b60df84ee82dc8b43f39e64495dfc15c1553e58b"
Jan 16 20:44:09 localhost.localdomain crio[2304]: time="2024-01-16 20:44:09.019770765Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d" id=f8f6e895-776e-484f-b197-5c1267cf9a90 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:44:09 localhost.localdomain crio[2304]: time="2024-01-16 20:44:09.021384040Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a86afd22a7cf3d4ab5bad64f333a5759eaa087500f4642d2edc18a59b1bdbdd9,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d],Size_:759621966,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=f8f6e895-776e-484f-b197-5c1267cf9a90 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:44:09 localhost.localdomain crio[2304]: time="2024-01-16 20:44:09.023620329Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d" id=d17b63d5-bdb9-4a52-aa3a-a145fc51a0e3 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:44:09 localhost.localdomain crio[2304]: time="2024-01-16 20:44:09.024194568Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a86afd22a7cf3d4ab5bad64f333a5759eaa087500f4642d2edc18a59b1bdbdd9,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b233c7a0c0a218322c5d2fd5d17dc21db914bd49e84f46dd53aec042eb77d39d],Size_:759621966,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=d17b63d5-bdb9-4a52-aa3a-a145fc51a0e3 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:44:09 localhost.localdomain crio[2304]: time="2024-01-16 20:44:09.026753158Z" level=info msg="Creating container: openshift-kni-infra/keepalived-localhost.localdomain/keepalived-monitor" id=e2fd13df-c24c-45b6-a3a7-5ff16a970268 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:44:09 localhost.localdomain crio[2304]: time="2024-01-16 20:44:09.043330353Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 20:44:09 localhost.localdomain bootkube.sh[7556]: Created "0000_03_securityinternal-openshift_02_rangeallocation.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/rangeallocations.security.internal.openshift.io -n
Jan 16 20:44:09 localhost.localdomain bootkube.sh[7556]: Created "0000_10_config-operator_01_apiserver-Default.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/apiservers.config.openshift.io -n
Jan 16 20:44:09 localhost.localdomain systemd[1]: Started crio-conmon-f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2.scope.
Jan 16 20:44:09 localhost.localdomain systemd[1]: Started crio-conmon-302fe19bfd2d338626baab9fe4dce5c19d2b69e0a415908d469a91feff61fe6d.scope.
Jan 16 20:44:09 localhost.localdomain systemd[1]: Started libcontainer container f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2.
Jan 16 20:44:09 localhost.localdomain bootkube.sh[7556]: Created "0000_10_config-operator_01_authentication.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/authentications.config.openshift.io -n
Jan 16 20:44:09 localhost.localdomain systemd[1]: Started libcontainer container 302fe19bfd2d338626baab9fe4dce5c19d2b69e0a415908d469a91feff61fe6d.
Jan 16 20:44:10 localhost.localdomain crio[2304]: time="2024-01-16 20:44:10.051531023Z" level=info msg="Created container f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager" id=c7ece4ee-9f54-4167-86b3-0f5fcf871b59 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:44:10 localhost.localdomain crio[2304]: time="2024-01-16 20:44:10.054495116Z" level=info msg="Starting container: f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2" id=24344b97-5060-47c8-b1df-e9601143f797 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 20:44:10 localhost.localdomain crio[2304]: time="2024-01-16 20:44:10.089352705Z" level=info msg="Started container" PID=8803 containerID=f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2 description=kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager id=24344b97-5060-47c8-b1df-e9601143f797 name=/runtime.v1.RuntimeService/StartContainer sandboxID=79c10015fd162b8e62ecb33ebeccbd5e476b9a518fb7eb7c00b519d5bb0eb934
Jan 16 20:44:10 localhost.localdomain crio[2304]: time="2024-01-16 20:44:10.118014222Z" level=info msg="Created container 302fe19bfd2d338626baab9fe4dce5c19d2b69e0a415908d469a91feff61fe6d: openshift-kni-infra/keepalived-localhost.localdomain/keepalived-monitor" id=e2fd13df-c24c-45b6-a3a7-5ff16a970268 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 20:44:10 localhost.localdomain crio[2304]: time="2024-01-16 20:44:10.119521151Z" level=info msg="Starting container: 302fe19bfd2d338626baab9fe4dce5c19d2b69e0a415908d469a91feff61fe6d" id=87de075d-c6af-47ce-bcef-2c687d00591f name=/runtime.v1.RuntimeService/StartContainer
Jan 16 20:44:10 localhost.localdomain crio[2304]: time="2024-01-16 20:44:10.164561116Z" level=info msg="Started container" PID=8813 containerID=302fe19bfd2d338626baab9fe4dce5c19d2b69e0a415908d469a91feff61fe6d description=openshift-kni-infra/keepalived-localhost.localdomain/keepalived-monitor id=87de075d-c6af-47ce-bcef-2c687d00591f name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ebdc370e2c6148b8fcf32f4fc2cc95081bf61cd8d6252b3c4013c6ed54602ca
Jan 16 20:44:10 localhost.localdomain bootkube.sh[7556]: Created "0000_10_config-operator_01_console.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/consoles.config.openshift.io -n
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: I0116 20:44:10.598190 2579 patch_prober.go:28] interesting pod/bootstrap-kube-apiserver-localhost.localdomain container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]log ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]etcd ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]etcd-readiness ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]api-openshift-apiserver-available ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]api-openshift-oauth-apiserver-available ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]informer-sync ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/openshift.io-api-request-count-filter ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-kube-apiserver-admission-initializer ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/priority-and-fairness-filter ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-apiextensions-informers ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-apiextensions-controllers ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/crd-informer-synced ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-system-namespaces-controller ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/bootstrap-controller ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-legacy-token-tracking-controller ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/apiservice-registration-controller ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/apiservice-status-available-controller ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]autoregister-completion ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/apiservice-openapi-controller ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]poststarthook/apiservice-discovery-controller ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: [+]shutdown ok
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: readyz check failed
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: I0116 20:44:10.598360 2579 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" podUID=1cb3be1f2df5273e9b77f7050777bcbe containerName="kube-apiserver" probeResult=failure output="HTTP probe failed with statuscode: 500"
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: I0116 20:44:10.608528 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: I0116 20:44:10.608821 2579 kubelet.go:2529] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: I0116 20:44:10.608880 2579 kubelet.go:2529] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: I0116 20:44:10.609015 2579 kubelet.go:2529] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: I0116 20:44:10.609111 2579 kubelet.go:2529] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: I0116 20:44:10.609147 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:44:10 localhost.localdomain kubelet.sh[2579]: I0116 20:44:10.621313 2579 kubelet.go:2529] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:44:10 localhost.localdomain bootkube.sh[7556]: Created "0000_10_config-operator_01_dns-Default.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/dnses.config.openshift.io -n
Jan 16 20:44:11 localhost.localdomain kubelet.sh[2579]: I0116 20:44:11.031148 2579 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kni-infra_keepalived-localhost.localdomain_f3cb0bd9c64889e06acccc1066e67828/keepalived/1.log"
Jan 16 20:44:11 localhost.localdomain kubelet.sh[2579]: I0116 20:44:11.032001 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/keepalived-localhost.localdomain" event=&{ID:f3cb0bd9c64889e06acccc1066e67828 Type:ContainerStarted Data:302fe19bfd2d338626baab9fe4dce5c19d2b69e0a415908d469a91feff61fe6d}
Jan 16 20:44:11 localhost.localdomain kubelet.sh[2579]: I0116 20:44:11.032390 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:11 localhost.localdomain kubelet.sh[2579]: I0116 20:44:11.035222 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:11 localhost.localdomain kubelet.sh[2579]: I0116 20:44:11.035335 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:11 localhost.localdomain kubelet.sh[2579]: I0116 20:44:11.035363 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:11 localhost.localdomain kubelet.sh[2579]: I0116 20:44:11.040733 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" event=&{ID:c3db590e56a311b869092b2d6b1724e5 Type:ContainerStarted Data:f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2}
Jan 16 20:44:11 localhost.localdomain kubelet.sh[2579]: I0116 20:44:11.041241 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:11 localhost.localdomain kubelet.sh[2579]: I0116 20:44:11.042830 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:11 localhost.localdomain kubelet.sh[2579]: I0116 20:44:11.043011 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:11 localhost.localdomain kubelet.sh[2579]: I0116 20:44:11.043041 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:11 localhost.localdomain kubelet.sh[2579]: I0116 20:44:11.054052 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:44:11 localhost.localdomain bootkube.sh[7556]: Created "0000_10_config-operator_01_featuregate.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/featuregates.config.openshift.io -n
Jan 16 20:44:11 localhost.localdomain bootkube.sh[7556]: Created "0000_10_config-operator_01_image.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/images.config.openshift.io -n
Jan 16 20:44:11 localhost.localdomain bootkube.sh[7556]: Created "0000_10_config-operator_01_imagecontentpolicy.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/imagecontentpolicies.config.openshift.io -n
Jan 16 20:44:12 localhost.localdomain kubelet.sh[2579]: I0116 20:44:12.045189 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:12 localhost.localdomain kubelet.sh[2579]: I0116 20:44:12.047738 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:12 localhost.localdomain kubelet.sh[2579]: I0116 20:44:12.047785 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:12 localhost.localdomain kubelet.sh[2579]: I0116 20:44:12.047812 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:12 localhost.localdomain bootkube.sh[7556]: Created "0000_10_config-operator_01_imagecontentsourcepolicy.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/imagecontentsourcepolicies.operator.openshift.io -n
Jan 16 20:44:12 localhost.localdomain bootkube.sh[7556]: Created "0000_10_config-operator_01_imagedigestmirrorset.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/imagedigestmirrorsets.config.openshift.io -n
Jan 16 20:44:13 localhost.localdomain kubelet.sh[2579]: I0116 20:44:13.054871 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:13 localhost.localdomain kubelet.sh[2579]: I0116 20:44:13.066154 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:13 localhost.localdomain kubelet.sh[2579]: I0116 20:44:13.066671 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:13 localhost.localdomain kubelet.sh[2579]: I0116 20:44:13.066732 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:13 localhost.localdomain bootkube.sh[7556]: Created "0000_10_config-operator_01_imagetagmirrorset.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/imagetagmirrorsets.config.openshift.io -n
Jan 16 20:44:13 localhost.localdomain bootkube.sh[7556]: Created "0000_10_config-operator_01_infrastructure-Default.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/infrastructures.config.openshift.io -n
Jan 16 20:44:13 localhost.localdomain bootkube.sh[7556]: Created "0000_10_config-operator_01_ingress.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/ingresses.config.openshift.io -n
Jan 16 20:44:14 localhost.localdomain bootkube.sh[7556]: Created "0000_10_config-operator_01_network.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/networks.config.openshift.io -n
Jan 16 20:44:14 localhost.localdomain bootkube.sh[7556]: Created "0000_10_config-operator_01_node.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/nodes.config.openshift.io -n
Jan 16 20:44:15 localhost.localdomain bootkube.sh[7556]: Created "0000_10_config-operator_01_oauth.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/oauths.config.openshift.io -n
Jan 16 20:44:15 localhost.localdomain bootkube.sh[7556]: Created "0000_10_config-operator_01_project.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/projects.config.openshift.io -n
Jan 16 20:44:15 localhost.localdomain bootkube.sh[7556]: Created "0000_10_config-operator_01_scheduler.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/schedulers.config.openshift.io -n
Jan 16 20:44:16 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_cr-scc-anyuid.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:anyuid -n
Jan 16 20:44:16 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_cr-scc-hostaccess.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:hostaccess -n
Jan 16 20:44:16 localhost.localdomain kubelet.sh[2579]: I0116 20:44:16.755582 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:16 localhost.localdomain kubelet.sh[2579]: I0116 20:44:16.762314 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:16 localhost.localdomain kubelet.sh[2579]: I0116 20:44:16.762573 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:16 localhost.localdomain kubelet.sh[2579]: I0116 20:44:16.762632 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:17 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_cr-scc-hostmount-anyuid.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:hostmount -n
Jan 16 20:44:17 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_cr-scc-hostnetwork-v2.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:hostnetwork-v2 -n
Jan 16 20:44:17 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_cr-scc-hostnetwork.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:hostnetwork -n
Jan 16 20:44:18 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_cr-scc-nonroot-v2.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:nonroot-v2 -n
Jan 16 20:44:18 localhost.localdomain kubelet.sh[2579]: I0116 20:44:18.468275 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:18 localhost.localdomain kubelet.sh[2579]: I0116 20:44:18.475553 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:18 localhost.localdomain kubelet.sh[2579]: I0116 20:44:18.475771 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:18 localhost.localdomain kubelet.sh[2579]: I0116 20:44:18.475831 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:18 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_cr-scc-nonroot.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:nonroot -n
Jan 16 20:44:19 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_cr-scc-privileged.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:privileged -n
Jan 16 20:44:19 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_cr-scc-restricted-v2.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:restricted-v2 -n
Jan 16 20:44:19 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_cr-scc-restricted.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:restricted -n
Jan 16 20:44:19 localhost.localdomain approve-csr.sh[8881]: No resources found
Jan 16 20:44:20 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_crb-systemauthenticated-scc-restricted-v2.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/system:openshift:scc:restricted-v2 -n
Jan 16 20:44:20 localhost.localdomain kubelet.sh[2579]: I0116 20:44:20.611391 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:44:20 localhost.localdomain kubelet.sh[2579]: I0116 20:44:20.611833 2579 kubelet.go:2529] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:44:20 localhost.localdomain kubelet.sh[2579]: I0116 20:44:20.614822 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:20 localhost.localdomain kubelet.sh[2579]: I0116 20:44:20.626885 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:20 localhost.localdomain kubelet.sh[2579]: I0116 20:44:20.627495 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:20 localhost.localdomain kubelet.sh[2579]: I0116 20:44:20.627666 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:20 localhost.localdomain kubelet.sh[2579]: I0116 20:44:20.647664 2579 kubelet.go:2529] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:44:20 localhost.localdomain kubelet.sh[2579]: I0116 20:44:20.650284 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 20:44:20 localhost.localdomain kubelet.sh[2579]: I0116 20:44:20.651384 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:20 localhost.localdomain kubelet.sh[2579]: I0116 20:44:20.656621 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:20 localhost.localdomain kubelet.sh[2579]: I0116 20:44:20.658530 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:20 localhost.localdomain kubelet.sh[2579]: I0116 20:44:20.659308 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:20 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_scc-anyuid.yaml" securitycontextconstraints.v1.security.openshift.io/anyuid -n
Jan 16 20:44:21 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_scc-hostaccess.yaml" securitycontextconstraints.v1.security.openshift.io/hostaccess -n
Jan 16 20:44:21 localhost.localdomain kubelet.sh[2579]: I0116 20:44:21.133773 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:21 localhost.localdomain kubelet.sh[2579]: I0116 20:44:21.143768 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:21 localhost.localdomain kubelet.sh[2579]: I0116 20:44:21.143917 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:21 localhost.localdomain kubelet.sh[2579]: I0116 20:44:21.144189 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:21 localhost.localdomain kubelet.sh[2579]: I0116 20:44:21.172851 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 20:44:21 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_scc-hostmount-anyuid.yaml" securitycontextconstraints.v1.security.openshift.io/hostmount-anyuid -n
Jan 16 20:44:21 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_scc-hostnetwork-v2.yaml" securitycontextconstraints.v1.security.openshift.io/hostnetwork-v2 -n
Jan 16 20:44:22 localhost.localdomain kubelet.sh[2579]: I0116 20:44:22.145642 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:44:22 localhost.localdomain kubelet.sh[2579]: I0116 20:44:22.153608 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:44:22 localhost.localdomain kubelet.sh[2579]: I0116 20:44:22.153819 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:44:22 localhost.localdomain kubelet.sh[2579]: I0116 20:44:22.153875 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:44:22 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_scc-hostnetwork.yaml" securitycontextconstraints.v1.security.openshift.io/hostnetwork -n
Jan 16 20:44:22 localhost.localdomain master-bmh-update.sh[8903]: error: the server doesn't have a resource type "baremetalhosts"
Jan 16 20:44:22 localhost.localdomain master-bmh-update.sh[6528]: Waiting for BareMetalHosts to appear...
Jan 16 20:44:22 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_scc-nonroot-v2.yaml" securitycontextconstraints.v1.security.openshift.io/nonroot-v2 -n
Jan 16 20:44:23 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_scc-nonroot.yaml" securitycontextconstraints.v1.security.openshift.io/nonroot -n
Jan 16 20:44:23 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_scc-privileged.yaml" securitycontextconstraints.v1.security.openshift.io/privileged -n
Jan 16 20:44:23 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_scc-restricted-v2.yaml" securitycontextconstraints.v1.security.openshift.io/restricted-v2 -n
Jan 16 20:44:24 localhost.localdomain bootkube.sh[7556]: Created "0000_20_kube-apiserver-operator_00_scc-restricted.yaml" securitycontextconstraints.v1.security.openshift.io/restricted -n
Jan 16 20:44:24 localhost.localdomain bootkube.sh[7556]: Created "0001_00_cluster-version-operator_03_service.yaml" services.v1./cluster-version-operator -n openshift-cluster-version
Jan 16 20:44:25 localhost.localdomain NetworkManager[1706]: [1705437865.0002] policy: set-hostname: set hostname to 'api-int.lab.ocpipi.lan' (from address lookup)
Jan 16 20:44:25 localhost.localdomain systemd[1]: Starting Hostname Service...
Jan 16 20:44:25 localhost.localdomain bootkube.sh[7556]: Failed to create "00_etcd-endpoints-cm.yaml" configmaps.v1./etcd-endpoints -n openshift-etcd: namespaces "openshift-etcd" not found
Jan 16 20:44:25 localhost.localdomain bootkube.sh[7556]: Created "00_namespace-security-allocation-controller-clusterrole.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:controller:namespace-security-allocation-controller -n
Jan 16 20:44:25 localhost.localdomain systemd[1]: Started Hostname Service.
Jan 16 20:44:25 localhost.localdomain bootkube.sh[7556]: Created "00_namespace-security-allocation-controller-clusterrolebinding.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/system:openshift:controller:namespace-security-allocation-controller -n
Jan 16 20:44:25 api-int.lab.ocpipi.lan systemd-hostnamed[8916]: Hostname set to (transient)
Jan 16 20:44:25 api-int.lab.ocpipi.lan systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 16 20:44:26 api-int.lab.ocpipi.lan systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 16 20:44:26 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "00_openshift-etcd-ns.yaml" namespaces.v1./openshift-etcd -n Jan 16 20:44:26 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "00_openshift-kube-apiserver-ns.yaml" namespaces.v1./openshift-kube-apiserver -n Jan 16 20:44:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:26.859672 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:44:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:26.869832 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:44:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:26.870316 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:44:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:26.870378 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:44:27 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "00_openshift-kube-apiserver-operator-ns.yaml" namespaces.v1./openshift-kube-apiserver-operator -n Jan 16 20:44:27 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "00_openshift-kube-controller-manager-ns.yaml" namespaces.v1./openshift-kube-controller-manager -n Jan 16 20:44:27 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "00_openshift-kube-controller-manager-operator-ns.yaml" namespaces.v1./openshift-kube-controller-manager-operator -n Jan 16 20:44:28 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "00_openshift-kube-scheduler-ns.yaml" namespaces.v1./openshift-kube-scheduler -n Jan 16 20:44:28 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "00_podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:controller:privileged-namespaces-psa-label-syncer -n Jan 16 20:44:29 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "00_podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/system:openshift:controller:privileged-namespaces-psa-label-syncer -n Jan 16 20:44:29 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "00_podsecurity-admission-label-syncer-controller-clusterrole.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:controller:podsecurity-admission-label-syncer-controller -n Jan 16 20:44:29 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "00_podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/system:openshift:controller:podsecurity-admission-label-syncer-controller -n Jan 16 20:44:30 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_kubeadmin-password-secret.yaml" secrets.v1./kubeadmin -n kube-system Jan 16 20:44:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:30.467827 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:44:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:30.477794 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:44:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:30.479669 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:44:30 
api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:30.480528 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:44:30 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_host-bmc-secrets-0.yaml" secrets.v1./cp-1-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:31 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_host-bmc-secrets-1.yaml" secrets.v1./cp-2-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:31 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_host-bmc-secrets-2.yaml" secrets.v1./cp-3-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:31 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_host-bmc-secrets-3.yaml" secrets.v1./w-1-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:32 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_host-bmc-secrets-4.yaml" secrets.v1./w-2-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:32 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_master-machines-0.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-0 -n openshift-machine-api: the server could not find the requested resource Jan 16 20:44:33 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_master-machines-1.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-1 -n openshift-machine-api: the server could not find the requested resource Jan 16 20:44:33 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_master-machines-2.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-2 -n openshift-machine-api: the server could not find the requested resource Jan 16 20:44:33 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_master-user-data-secret.yaml" secrets.v1./master-user-data-managed -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:34 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_worker-machineset-0.yaml" machinesets.v1beta1.machine.openshift.io/lab-wcpsl-worker-0 -n openshift-machine-api: the server could not find the requested resource Jan 16 20:44:34 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_worker-user-data-secret.yaml" secrets.v1./worker-user-data-managed -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:35 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-machineconfig_99-master-ssh.yaml" machineconfigs.v1.machineconfiguration.openshift.io/99-master-ssh -n : the server could not find the requested resource Jan 16 20:44:35 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-machineconfig_99-worker-ssh.yaml" machineconfigs.v1.machineconfiguration.openshift.io/99-worker-ssh -n : the server could not find the requested resource Jan 16 20:44:35 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "apiserver.openshift.io_apirequestcount.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io -n Jan 16 20:44:36 api-int.lab.ocpipi.lan 
systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Jan 16 20:44:36 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "cco-cloudcredential_v1_credentialsrequest_crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/credentialsrequests.cloudcredential.openshift.io -n Jan 16 20:44:36 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "cco-cloudcredential_v1_operator_config_custresdef.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/cloudcredentials.operator.openshift.io -n Jan 16 20:44:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:36.958811 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:44:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:36.966596 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:44:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:36.966682 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:44:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:36.966762 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:44:37 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "cco-namespace.yaml" namespaces.v1./openshift-cloud-credential-operator -n Jan 16 20:44:39 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "cco-operator-config.yaml" cloudcredentials.v1.operator.openshift.io/cluster -n Jan 16 20:44:39 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "cluster-config.yaml" configmaps.v1./cluster-config-v1 -n kube-system Jan 16 20:44:39 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "cluster-dns-02-config.yml" dnses.v1.config.openshift.io/cluster -n Jan 16 20:44:39 api-int.lab.ocpipi.lan bootkube.sh[7556]: Updated status for "cluster-dns-02-config.yml" dnses.v1.config.openshift.io/cluster -n Jan 16 20:44:39 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "cluster-infrastructure-02-config.yml" infrastructures.v1.config.openshift.io/cluster -n Jan 16 20:44:39 api-int.lab.ocpipi.lan bootkube.sh[7556]: Updated status for "cluster-infrastructure-02-config.yml" infrastructures.v1.config.openshift.io/cluster -n Jan 16 20:44:39 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "cluster-ingress-00-custom-resource-definition.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/ingresscontrollers.operator.openshift.io -n Jan 16 20:44:39 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "cluster-ingress-00-namespace.yaml" namespaces.v1./openshift-ingress-operator -n Jan 16 20:44:40 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "cluster-ingress-02-config.yml" ingresses.v1.config.openshift.io/cluster -n Jan 16 20:44:40 api-int.lab.ocpipi.lan bootkube.sh[7556]: Updated status for "cluster-ingress-02-config.yml" ingresses.v1.config.openshift.io/cluster -n Jan 16 20:44:40 api-int.lab.ocpipi.lan bootkube.sh[7556]: Skipped "cluster-network-01-crd.yml" customresourcedefinitions.v1.apiextensions.k8s.io/networks.config.openshift.io -n as it already exists Jan 16 20:44:40 api-int.lab.ocpipi.lan approve-csr.sh[8985]: No resources found Jan 16 20:44:41 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "cluster-network-02-config.yml" networks.v1.config.openshift.io/cluster -n Jan 16 20:44:41 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "cluster-proxy-01-config.yaml" proxies.v1.config.openshift.io/cluster -n Jan 16 
20:44:41 api-int.lab.ocpipi.lan bootkube.sh[7556]: Updated status for "cluster-proxy-01-config.yaml" proxies.v1.config.openshift.io/cluster -n Jan 16 20:44:42 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "cluster-role-binding-kube-apiserver.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/kube-apiserver -n Jan 16 20:44:42 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "cluster-role-kube-apiserver.yaml" clusterroles.v1.rbac.authorization.k8s.io/kube-apiserver -n Jan 16 20:44:42 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "cluster-scheduler-02-config.yml" schedulers.v1.config.openshift.io/cluster -n Jan 16 20:44:43 api-int.lab.ocpipi.lan bootkube.sh[7556]: Updated status for "cluster-scheduler-02-config.yml" schedulers.v1.config.openshift.io/cluster -n Jan 16 20:44:43 api-int.lab.ocpipi.lan master-bmh-update.sh[9000]: error: the server doesn't have a resource type "baremetalhosts" Jan 16 20:44:43 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: Waiting for BareMetalHosts to appear... Jan 16 20:44:43 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "configmap-admin-kubeconfig-client-ca.yaml" configmaps.v1./admin-kubeconfig-client-ca -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:43.734346 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 20:44:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:43.735499 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 20:44:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:43.735626 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 20:44:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:43.735693 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 20:44:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:43.736105 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 20:44:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:43.736170 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 20:44:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:43.736334 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 20:44:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:43.736534 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 20:44:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:43.736589 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running Jan 16 20:44:43 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "configmap-csr-controller-ca.yaml" configmaps.v1./csr-controller-ca -n openshift-config-managed: namespaces "openshift-config-managed" not found Jan 16 20:44:44 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "configmap-kubelet-bootstrap-kubeconfig-ca.yaml" 
configmaps.v1./kubelet-bootstrap-kubeconfig -n openshift-config-managed: namespaces "openshift-config-managed" not found Jan 16 20:44:44 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "configmap-sa-token-signing-certs.yaml" configmaps.v1./sa-token-signing-certs -n openshift-config-managed: namespaces "openshift-config-managed" not found Jan 16 20:44:45 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "csr-bootstrap-role-binding.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/system-bootstrap-node-bootstrapper -n Jan 16 20:44:45 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "cvo-overrides.yaml" clusterversions.v1.config.openshift.io/version -n openshift-cluster-version Jan 16 20:44:45 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "etcd-ca-bundle-configmap.yaml" configmaps.v1./etcd-ca-bundle -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:46 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "etcd-client-secret.yaml" secrets.v1./etcd-client -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:46 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "etcd-metric-client-secret.yaml" secrets.v1./etcd-metric-client -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:47.053670 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:44:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:47.063599 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:44:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:47.064642 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:44:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:47.065557 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:44:47 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "etcd-metric-serving-ca-configmap.yaml" configmaps.v1./etcd-metric-serving-ca -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:47 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "etcd-metric-signer-secret.yaml" secrets.v1./etcd-metric-signer -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:47 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "etcd-serving-ca-configmap.yaml" configmaps.v1./etcd-serving-ca -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:48 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "etcd-signer-secret.yaml" secrets.v1./etcd-signer -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:48 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "kube-apiserver-serving-ca-configmap.yaml" configmaps.v1./initial-kube-apiserver-server-ca -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:49 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "kube-cloud-config.yaml" secrets.v1./kube-cloud-cfg -n kube-system Jan 16 20:44:49 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "kube-system-configmap-root-ca.yaml" configmaps.v1./root-ca -n kube-system Jan 16 20:44:49 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "machine-config-server-tls-secret.yaml" secrets.v1./machine-config-server-tls -n 
openshift-machine-config-operator: namespaces "openshift-machine-config-operator" not found Jan 16 20:44:50 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "openshift-config-secret-pull-secret.yaml" secrets.v1./pull-secret -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:50 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "openshift-etcd-svc.yaml" services.v1./etcd -n openshift-etcd Jan 16 20:44:51 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "openshift-install-manifests.yaml" configmaps.v1./openshift-install-manifests -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:51 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "openshift-install.yaml" configmaps.v1./openshift-install -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:51 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "secret-aggregator-client-signer.yaml" secrets.v1./aggregator-client-signer -n openshift-kube-apiserver-operator Jan 16 20:44:52 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "secret-bound-sa-token-signing-key.yaml" secrets.v1./next-bound-service-account-signing-key -n openshift-kube-apiserver-operator Jan 16 20:44:52 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "secret-control-plane-client-signer.yaml" secrets.v1./kube-control-plane-signer -n openshift-kube-apiserver-operator Jan 16 20:44:53 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "secret-csr-signer-signer.yaml" secrets.v1./csr-signer-signer -n openshift-kube-controller-manager-operator Jan 16 20:44:53 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "secret-initial-kube-controller-manager-service-account-private-key.yaml" secrets.v1./initial-service-account-private-key -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:53 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "secret-kube-apiserver-to-kubelet-signer.yaml" secrets.v1./kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator Jan 16 20:44:54 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "secret-loadbalancer-serving-signer.yaml" secrets.v1./loadbalancer-serving-signer -n openshift-kube-apiserver-operator Jan 16 20:44:54 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "secret-localhost-serving-signer.yaml" secrets.v1./localhost-serving-signer -n openshift-kube-apiserver-operator Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "secret-service-network-serving-signer.yaml" secrets.v1./service-network-serving-signer -n openshift-kube-apiserver-operator Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#1] failed to create some manifests: Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "0000_00_cluster-version-operator_01_adminack_configmap.yaml": failed to create configmaps.v1./admin-acks -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "0000_00_cluster-version-operator_01_admingate_configmap.yaml": failed to create configmaps.v1./admin-gates -n openshift-config-managed: namespaces "openshift-config-managed" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "0000_00_cluster-version-operator_03_deployment.yaml": failed to create deployments.v1.apps/cluster-version-operator -n openshift-cluster-version: deployments.apps "cluster-version-operator" is forbidden: quota.openshift.io/ClusterResourceQuota: caches not synchronized Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "00_etcd-endpoints-cm.yaml": 
failed to create configmaps.v1./etcd-endpoints -n openshift-etcd: namespaces "openshift-etcd" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_feature-gate.yaml": unable to get REST mapping for "99_feature-gate.yaml": no matches for kind "FeatureGate" in version "config.openshift.io/v1" Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_host-bmc-secrets-0.yaml": failed to create secrets.v1./cp-1-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_host-bmc-secrets-1.yaml": failed to create secrets.v1./cp-2-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_host-bmc-secrets-2.yaml": failed to create secrets.v1./cp-3-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_host-bmc-secrets-3.yaml": failed to create secrets.v1./w-1-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_host-bmc-secrets-4.yaml": failed to create secrets.v1./w-2-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_hosts-0.yaml": unable to get REST mapping for "99_openshift-cluster-api_hosts-0.yaml": no matches for kind "BareMetalHost" in version "metal3.io/v1alpha1" Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_hosts-1.yaml": unable to get REST mapping for "99_openshift-cluster-api_hosts-1.yaml": no matches for kind "BareMetalHost" in version "metal3.io/v1alpha1" Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_hosts-2.yaml": unable to get REST mapping for "99_openshift-cluster-api_hosts-2.yaml": no matches for kind "BareMetalHost" in version "metal3.io/v1alpha1" Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_hosts-3.yaml": unable to get REST mapping for "99_openshift-cluster-api_hosts-3.yaml": no matches for kind "BareMetalHost" in version "metal3.io/v1alpha1" Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_hosts-4.yaml": unable to get REST mapping for "99_openshift-cluster-api_hosts-4.yaml": no matches for kind "BareMetalHost" in version "metal3.io/v1alpha1" Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_master-machines-0.yaml": failed to create machines.v1beta1.machine.openshift.io/lab-wcpsl-master-0 -n openshift-machine-api: the server could not find the requested resource Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_master-machines-1.yaml": failed to create machines.v1beta1.machine.openshift.io/lab-wcpsl-master-1 -n openshift-machine-api: the server could not find the requested resource Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_master-machines-2.yaml": failed to create 
machines.v1beta1.machine.openshift.io/lab-wcpsl-master-2 -n openshift-machine-api: the server could not find the requested resource Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_master-user-data-secret.yaml": failed to create secrets.v1./master-user-data-managed -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_worker-machineset-0.yaml": failed to create machinesets.v1beta1.machine.openshift.io/lab-wcpsl-worker-0 -n openshift-machine-api: the server could not find the requested resource Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_worker-user-data-secret.yaml": failed to create secrets.v1./worker-user-data-managed -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-machineconfig_99-master-ssh.yaml": failed to create machineconfigs.v1.machineconfiguration.openshift.io/99-master-ssh -n : the server could not find the requested resource Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-machineconfig_99-worker-ssh.yaml": failed to create machineconfigs.v1.machineconfiguration.openshift.io/99-worker-ssh -n : the server could not find the requested resource Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "configmap-admin-kubeconfig-client-ca.yaml": failed to create configmaps.v1./admin-kubeconfig-client-ca -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "configmap-csr-controller-ca.yaml": failed to create configmaps.v1./csr-controller-ca -n openshift-config-managed: namespaces "openshift-config-managed" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "configmap-kubelet-bootstrap-kubeconfig-ca.yaml": failed to create configmaps.v1./kubelet-bootstrap-kubeconfig -n openshift-config-managed: namespaces "openshift-config-managed" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "configmap-sa-token-signing-certs.yaml": failed to create configmaps.v1./sa-token-signing-certs -n openshift-config-managed: namespaces "openshift-config-managed" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "etcd-ca-bundle-configmap.yaml": failed to create configmaps.v1./etcd-ca-bundle -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "etcd-client-secret.yaml": failed to create secrets.v1./etcd-client -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "etcd-metric-client-secret.yaml": failed to create secrets.v1./etcd-metric-client -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "etcd-metric-serving-ca-configmap.yaml": failed to create configmaps.v1./etcd-metric-serving-ca -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "etcd-metric-signer-secret.yaml": failed to create secrets.v1./etcd-metric-signer -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "etcd-serving-ca-configmap.yaml": failed to create configmaps.v1./etcd-serving-ca -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: 
"etcd-signer-secret.yaml": failed to create secrets.v1./etcd-signer -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "kube-apiserver-serving-ca-configmap.yaml": failed to create configmaps.v1./initial-kube-apiserver-server-ca -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "machine-config-server-tls-secret.yaml": failed to create secrets.v1./machine-config-server-tls -n openshift-machine-config-operator: namespaces "openshift-machine-config-operator" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "openshift-config-secret-pull-secret.yaml": failed to create secrets.v1./pull-secret -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "openshift-install-manifests.yaml": failed to create configmaps.v1./openshift-install-manifests -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "openshift-install.yaml": failed to create configmaps.v1./openshift-install -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan bootkube.sh[7556]: "secret-initial-kube-controller-manager-service-account-private-key.yaml": failed to create secrets.v1./initial-service-account-private-key -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:55 api-int.lab.ocpipi.lan systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 16 20:44:56 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "0000_00_cluster-version-operator_01_adminack_configmap.yaml" configmaps.v1./admin-acks -n openshift-config: namespaces "openshift-config" not found Jan 16 20:44:56 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "0000_00_cluster-version-operator_01_admingate_configmap.yaml" configmaps.v1./admin-gates -n openshift-config-managed: namespaces "openshift-config-managed" not found Jan 16 20:44:56 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "0000_00_cluster-version-operator_03_deployment.yaml" deployments.v1.apps/cluster-version-operator -n openshift-cluster-version Jan 16 20:44:56 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "00_etcd-endpoints-cm.yaml" configmaps.v1./etcd-endpoints -n openshift-etcd Jan 16 20:44:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:57.191424 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:44:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:57.195524 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:44:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:57.195738 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:44:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:44:57.195772 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:44:57 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_feature-gate.yaml" featuregates.v1.config.openshift.io/cluster -n Jan 16 20:44:57 api-int.lab.ocpipi.lan bootkube.sh[7556]: Updated status for "99_feature-gate.yaml" featuregates.v1.config.openshift.io/cluster -n Jan 16 20:44:57 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create 
"99_openshift-cluster-api_host-bmc-secrets-0.yaml" secrets.v1./cp-1-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:58 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_host-bmc-secrets-1.yaml" secrets.v1./cp-2-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:58 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_host-bmc-secrets-2.yaml" secrets.v1./cp-3-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:58 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_host-bmc-secrets-3.yaml" secrets.v1./w-1-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:59 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_host-bmc-secrets-4.yaml" secrets.v1./w-2-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:44:59 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_master-machines-0.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-0 -n openshift-machine-api: the server could not find the requested resource Jan 16 20:45:00 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_master-machines-1.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-1 -n openshift-machine-api: the server could not find the requested resource Jan 16 20:45:00 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_master-machines-2.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-2 -n openshift-machine-api: the server could not find the requested resource Jan 16 20:45:01 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_master-user-data-secret.yaml" secrets.v1./master-user-data-managed -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:45:01 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_worker-machineset-0.yaml" machinesets.v1beta1.machine.openshift.io/lab-wcpsl-worker-0 -n openshift-machine-api: the server could not find the requested resource Jan 16 20:45:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:01.465561 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:45:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:01.469667 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:45:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:01.469824 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:45:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:01.469858 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:45:01 api-int.lab.ocpipi.lan approve-csr.sh[9072]: No resources found Jan 16 20:45:01 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_worker-user-data-secret.yaml" secrets.v1./worker-user-data-managed -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:45:02 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create 
"99_openshift-machineconfig_99-master-ssh.yaml" machineconfigs.v1.machineconfiguration.openshift.io/99-master-ssh -n : the server could not find the requested resource Jan 16 20:45:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:02.466251 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:45:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:02.470713 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:45:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:02.470826 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:45:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:02.470860 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:45:02 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-machineconfig_99-worker-ssh.yaml" machineconfigs.v1.machineconfiguration.openshift.io/99-worker-ssh -n : the server could not find the requested resource Jan 16 20:45:03 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "configmap-admin-kubeconfig-client-ca.yaml" configmaps.v1./admin-kubeconfig-client-ca -n openshift-config: namespaces "openshift-config" not found Jan 16 20:45:03 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "configmap-csr-controller-ca.yaml" configmaps.v1./csr-controller-ca -n openshift-config-managed: namespaces "openshift-config-managed" not found Jan 16 20:45:03 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "configmap-kubelet-bootstrap-kubeconfig-ca.yaml" configmaps.v1./kubelet-bootstrap-kubeconfig -n openshift-config-managed: namespaces "openshift-config-managed" not found Jan 16 20:45:03 api-int.lab.ocpipi.lan master-bmh-update.sh[9088]: No resources found in openshift-machine-api namespace. 
Jan 16 20:45:04 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:45:04 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "configmap-sa-token-signing-certs.yaml" configmaps.v1./sa-token-signing-certs -n openshift-config-managed: namespaces "openshift-config-managed" not found
Jan 16 20:45:04 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "etcd-ca-bundle-configmap.yaml" configmaps.v1./etcd-ca-bundle -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:05 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "etcd-client-secret.yaml" secrets.v1./etcd-client -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:05 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "etcd-metric-client-secret.yaml" secrets.v1./etcd-metric-client -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:05 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "etcd-metric-serving-ca-configmap.yaml" configmaps.v1./etcd-metric-serving-ca -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:06 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "etcd-metric-signer-secret.yaml" secrets.v1./etcd-metric-signer -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:06 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "etcd-serving-ca-configmap.yaml" configmaps.v1./etcd-serving-ca -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:06 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "etcd-signer-secret.yaml" secrets.v1./etcd-signer -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:07.235527 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:45:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:07.240556 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:45:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:07.241023 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:45:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:07.241061 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:45:07 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "kube-apiserver-serving-ca-configmap.yaml" configmaps.v1./initial-kube-apiserver-server-ca -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:07 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "machine-config-server-tls-secret.yaml" secrets.v1./machine-config-server-tls -n openshift-machine-config-operator
Jan 16 20:45:08 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "openshift-config-secret-pull-secret.yaml" secrets.v1./pull-secret -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:08 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.0d7lZA.mount: Deactivated successfully.
Jan 16 20:45:08 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "openshift-install-manifests.yaml" configmaps.v1./openshift-install-manifests -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:08 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "openshift-install.yaml" configmaps.v1./openshift-install -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "secret-initial-kube-controller-manager-service-account-private-key.yaml" secrets.v1./initial-service-account-private-key -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#2] failed to create some manifests:
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "0000_00_cluster-version-operator_01_adminack_configmap.yaml": failed to create configmaps.v1./admin-acks -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "0000_00_cluster-version-operator_01_admingate_configmap.yaml": failed to create configmaps.v1./admin-gates -n openshift-config-managed: namespaces "openshift-config-managed" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_host-bmc-secrets-0.yaml": failed to create secrets.v1./cp-1-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_host-bmc-secrets-1.yaml": failed to create secrets.v1./cp-2-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_host-bmc-secrets-2.yaml": failed to create secrets.v1./cp-3-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_host-bmc-secrets-3.yaml": failed to create secrets.v1./w-1-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_host-bmc-secrets-4.yaml": failed to create secrets.v1./w-2-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_hosts-0.yaml": unable to get REST mapping for "99_openshift-cluster-api_hosts-0.yaml": no matches for kind "BareMetalHost" in version "metal3.io/v1alpha1"
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_hosts-1.yaml": unable to get REST mapping for "99_openshift-cluster-api_hosts-1.yaml": no matches for kind "BareMetalHost" in version "metal3.io/v1alpha1"
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_hosts-2.yaml": unable to get REST mapping for "99_openshift-cluster-api_hosts-2.yaml": no matches for kind "BareMetalHost" in version "metal3.io/v1alpha1"
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_hosts-3.yaml": unable to get REST mapping for "99_openshift-cluster-api_hosts-3.yaml": no matches for kind "BareMetalHost" in version "metal3.io/v1alpha1"
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_hosts-4.yaml": unable to get REST mapping for "99_openshift-cluster-api_hosts-4.yaml": no matches for kind "BareMetalHost" in version "metal3.io/v1alpha1"
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_master-machines-0.yaml": failed to create machines.v1beta1.machine.openshift.io/lab-wcpsl-master-0 -n openshift-machine-api: the server could not find the requested resource
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_master-machines-1.yaml": failed to create machines.v1beta1.machine.openshift.io/lab-wcpsl-master-1 -n openshift-machine-api: the server could not find the requested resource
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_master-machines-2.yaml": failed to create machines.v1beta1.machine.openshift.io/lab-wcpsl-master-2 -n openshift-machine-api: the server could not find the requested resource
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_master-user-data-secret.yaml": failed to create secrets.v1./master-user-data-managed -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_worker-machineset-0.yaml": failed to create machinesets.v1beta1.machine.openshift.io/lab-wcpsl-worker-0 -n openshift-machine-api: the server could not find the requested resource
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_worker-user-data-secret.yaml": failed to create secrets.v1./worker-user-data-managed -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-machineconfig_99-master-ssh.yaml": failed to create machineconfigs.v1.machineconfiguration.openshift.io/99-master-ssh -n : the server could not find the requested resource
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-machineconfig_99-worker-ssh.yaml": failed to create machineconfigs.v1.machineconfiguration.openshift.io/99-worker-ssh -n : the server could not find the requested resource
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "configmap-admin-kubeconfig-client-ca.yaml": failed to create configmaps.v1./admin-kubeconfig-client-ca -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "configmap-csr-controller-ca.yaml": failed to create configmaps.v1./csr-controller-ca -n openshift-config-managed: namespaces "openshift-config-managed" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "configmap-kubelet-bootstrap-kubeconfig-ca.yaml": failed to create configmaps.v1./kubelet-bootstrap-kubeconfig -n openshift-config-managed: namespaces "openshift-config-managed" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "configmap-sa-token-signing-certs.yaml": failed to create configmaps.v1./sa-token-signing-certs -n openshift-config-managed: namespaces "openshift-config-managed" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "etcd-ca-bundle-configmap.yaml": failed to create configmaps.v1./etcd-ca-bundle -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "etcd-client-secret.yaml": failed to create secrets.v1./etcd-client -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "etcd-metric-client-secret.yaml": failed to create secrets.v1./etcd-metric-client -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "etcd-metric-serving-ca-configmap.yaml": failed to create configmaps.v1./etcd-metric-serving-ca -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "etcd-metric-signer-secret.yaml": failed to create secrets.v1./etcd-metric-signer -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "etcd-serving-ca-configmap.yaml": failed to create configmaps.v1./etcd-serving-ca -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "etcd-signer-secret.yaml": failed to create secrets.v1./etcd-signer -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "kube-apiserver-serving-ca-configmap.yaml": failed to create configmaps.v1./initial-kube-apiserver-server-ca -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "openshift-config-secret-pull-secret.yaml": failed to create secrets.v1./pull-secret -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "openshift-install-manifests.yaml": failed to create configmaps.v1./openshift-install-manifests -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "openshift-install.yaml": failed to create configmaps.v1./openshift-install -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: "secret-initial-kube-controller-manager-service-account-private-key.yaml": failed to create secrets.v1./initial-service-account-private-key -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:09 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "0000_00_cluster-version-operator_01_adminack_configmap.yaml" configmaps.v1./admin-acks -n openshift-config: namespaces "openshift-config" not found
Jan 16 20:45:10 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "0000_00_cluster-version-operator_01_admingate_configmap.yaml" configmaps.v1./admin-gates -n openshift-config-managed: namespaces "openshift-config-managed" not found
Jan 16 20:45:10 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_host-bmc-secrets-0.yaml" secrets.v1./cp-1-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:10 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_host-bmc-secrets-1.yaml" secrets.v1./cp-2-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:11 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_host-bmc-secrets-2.yaml" secrets.v1./cp-3-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:11 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_host-bmc-secrets-3.yaml" secrets.v1./w-1-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:12 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_host-bmc-secrets-4.yaml" secrets.v1./w-2-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:12 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_hosts-0.yaml" baremetalhosts.v1alpha1.metal3.io/cp-1 -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:12 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_hosts-1.yaml" baremetalhosts.v1alpha1.metal3.io/cp-2 -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:13 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_hosts-2.yaml" baremetalhosts.v1alpha1.metal3.io/cp-3 -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:13 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_hosts-3.yaml" baremetalhosts.v1alpha1.metal3.io/w-1 -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:14 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_hosts-4.yaml" baremetalhosts.v1alpha1.metal3.io/w-2 -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:14 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_master-machines-0.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-0 -n openshift-machine-api: the server could not find the requested resource
Jan 16 20:45:14 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_master-machines-1.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-1 -n openshift-machine-api: the server could not find the requested resource
Jan 16 20:45:15 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_master-machines-2.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-2 -n openshift-machine-api: the server could not find the requested resource
Jan 16 20:45:15 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_master-user-data-secret.yaml" secrets.v1./master-user-data-managed -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:16 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_worker-machineset-0.yaml" machinesets.v1beta1.machine.openshift.io/lab-wcpsl-worker-0 -n openshift-machine-api: the server could not find the requested resource
Jan 16 20:45:16 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "99_openshift-cluster-api_worker-user-data-secret.yaml" secrets.v1./worker-user-data-managed -n openshift-machine-api: namespaces "openshift-machine-api" not found
Jan 16 20:45:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:17.279643 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:45:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:17.284238 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:45:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:17.284474 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:45:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:17.284511 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
event="NodeHasSufficientPID" Jan 16 20:45:17 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_openshift-machineconfig_99-master-ssh.yaml" machineconfigs.v1.machineconfiguration.openshift.io/99-master-ssh -n Jan 16 20:45:17 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_openshift-machineconfig_99-worker-ssh.yaml" machineconfigs.v1.machineconfiguration.openshift.io/99-worker-ssh -n Jan 16 20:45:17 api-int.lab.ocpipi.lan bootkube.sh[7556]: Failed to create "configmap-admin-kubeconfig-client-ca.yaml" configmaps.v1./admin-kubeconfig-client-ca -n openshift-config: namespaces "openshift-config" not found Jan 16 20:45:18 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "configmap-csr-controller-ca.yaml" configmaps.v1./csr-controller-ca -n openshift-config-managed Jan 16 20:45:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.ZwUtR2.mount: Deactivated successfully. Jan 16 20:45:18 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "configmap-kubelet-bootstrap-kubeconfig-ca.yaml" configmaps.v1./kubelet-bootstrap-kubeconfig -n openshift-config-managed Jan 16 20:45:18 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "configmap-sa-token-signing-certs.yaml" configmaps.v1./sa-token-signing-certs -n openshift-config-managed Jan 16 20:45:19 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "etcd-ca-bundle-configmap.yaml" configmaps.v1./etcd-ca-bundle -n openshift-config Jan 16 20:45:19 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "etcd-client-secret.yaml" secrets.v1./etcd-client -n openshift-config Jan 16 20:45:20 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "etcd-metric-client-secret.yaml" secrets.v1./etcd-metric-client -n openshift-config Jan 16 20:45:20 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "etcd-metric-serving-ca-configmap.yaml" configmaps.v1./etcd-metric-serving-ca -n openshift-config Jan 16 20:45:20 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "etcd-metric-signer-secret.yaml" secrets.v1./etcd-metric-signer -n openshift-config Jan 16 20:45:21 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "etcd-serving-ca-configmap.yaml" configmaps.v1./etcd-serving-ca -n openshift-config Jan 16 20:45:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:21.468605 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:45:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:21.475756 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:45:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:21.476090 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:45:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:21.476148 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:45:21 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "etcd-signer-secret.yaml" secrets.v1./etcd-signer -n openshift-config Jan 16 20:45:22 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "kube-apiserver-serving-ca-configmap.yaml" configmaps.v1./initial-kube-apiserver-server-ca -n openshift-config Jan 16 20:45:22 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "openshift-config-secret-pull-secret.yaml" secrets.v1./pull-secret -n openshift-config Jan 16 20:45:22 api-int.lab.ocpipi.lan approve-csr.sh[9165]: No resources found Jan 
16 20:45:22 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "openshift-install-manifests.yaml" configmaps.v1./openshift-install-manifests -n openshift-config Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "openshift-install.yaml" configmaps.v1./openshift-install -n openshift-config Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "secret-initial-kube-controller-manager-service-account-private-key.yaml" secrets.v1./initial-service-account-private-key -n openshift-config Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#3] failed to create some manifests: Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "0000_00_cluster-version-operator_01_adminack_configmap.yaml": failed to create configmaps.v1./admin-acks -n openshift-config: namespaces "openshift-config" not found Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "0000_00_cluster-version-operator_01_admingate_configmap.yaml": failed to create configmaps.v1./admin-gates -n openshift-config-managed: namespaces "openshift-config-managed" not found Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_host-bmc-secrets-0.yaml": failed to create secrets.v1./cp-1-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_host-bmc-secrets-1.yaml": failed to create secrets.v1./cp-2-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_host-bmc-secrets-2.yaml": failed to create secrets.v1./cp-3-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_host-bmc-secrets-3.yaml": failed to create secrets.v1./w-1-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_host-bmc-secrets-4.yaml": failed to create secrets.v1./w-2-bmc-secret -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_hosts-0.yaml": failed to create baremetalhosts.v1alpha1.metal3.io/cp-1 -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_hosts-1.yaml": failed to create baremetalhosts.v1alpha1.metal3.io/cp-2 -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_hosts-2.yaml": failed to create baremetalhosts.v1alpha1.metal3.io/cp-3 -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_hosts-3.yaml": failed to create baremetalhosts.v1alpha1.metal3.io/w-1 -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_hosts-4.yaml": failed to create baremetalhosts.v1alpha1.metal3.io/w-2 -n openshift-machine-api: namespaces 
"openshift-machine-api" not found Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_master-machines-0.yaml": failed to create machines.v1beta1.machine.openshift.io/lab-wcpsl-master-0 -n openshift-machine-api: the server could not find the requested resource Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_master-machines-1.yaml": failed to create machines.v1beta1.machine.openshift.io/lab-wcpsl-master-1 -n openshift-machine-api: the server could not find the requested resource Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_master-machines-2.yaml": failed to create machines.v1beta1.machine.openshift.io/lab-wcpsl-master-2 -n openshift-machine-api: the server could not find the requested resource Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_master-user-data-secret.yaml": failed to create secrets.v1./master-user-data-managed -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_worker-machineset-0.yaml": failed to create machinesets.v1beta1.machine.openshift.io/lab-wcpsl-worker-0 -n openshift-machine-api: the server could not find the requested resource Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_openshift-cluster-api_worker-user-data-secret.yaml": failed to create secrets.v1./worker-user-data-managed -n openshift-machine-api: namespaces "openshift-machine-api" not found Jan 16 20:45:23 api-int.lab.ocpipi.lan bootkube.sh[7556]: "configmap-admin-kubeconfig-client-ca.yaml": failed to create configmaps.v1./admin-kubeconfig-client-ca -n openshift-config: namespaces "openshift-config" not found Jan 16 20:45:24 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:45:24 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "0000_00_cluster-version-operator_01_adminack_configmap.yaml" configmaps.v1./admin-acks -n openshift-config Jan 16 20:45:24 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "0000_00_cluster-version-operator_01_admingate_configmap.yaml" configmaps.v1./admin-gates -n openshift-config-managed Jan 16 20:45:24 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_openshift-cluster-api_host-bmc-secrets-0.yaml" secrets.v1./cp-1-bmc-secret -n openshift-machine-api Jan 16 20:45:25 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_openshift-cluster-api_host-bmc-secrets-1.yaml" secrets.v1./cp-2-bmc-secret -n openshift-machine-api Jan 16 20:45:25 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_openshift-cluster-api_host-bmc-secrets-2.yaml" secrets.v1./cp-3-bmc-secret -n openshift-machine-api Jan 16 20:45:26 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_openshift-cluster-api_host-bmc-secrets-3.yaml" secrets.v1./w-1-bmc-secret -n openshift-machine-api Jan 16 20:45:26 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_openshift-cluster-api_host-bmc-secrets-4.yaml" secrets.v1./w-2-bmc-secret -n openshift-machine-api Jan 16 20:45:26 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_openshift-cluster-api_hosts-0.yaml" baremetalhosts.v1alpha1.metal3.io/cp-1 -n openshift-machine-api Jan 16 20:45:27 api-int.lab.ocpipi.lan bootkube.sh[7556]: Updated status for "99_openshift-cluster-api_hosts-0.yaml" baremetalhosts.v1alpha1.metal3.io/cp-1 -n openshift-machine-api Jan 16 20:45:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:27.342801 2579 kubelet_node_status.go:376] 
"Setting node annotation to enable volume controller attach/detach" Jan 16 20:45:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:27.353257 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:45:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:27.353671 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:45:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:27.353733 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:45:27 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_openshift-cluster-api_hosts-1.yaml" baremetalhosts.v1alpha1.metal3.io/cp-2 -n openshift-machine-api Jan 16 20:45:27 api-int.lab.ocpipi.lan bootkube.sh[7556]: Updated status for "99_openshift-cluster-api_hosts-1.yaml" baremetalhosts.v1alpha1.metal3.io/cp-2 -n openshift-machine-api Jan 16 20:45:28 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_openshift-cluster-api_hosts-2.yaml" baremetalhosts.v1alpha1.metal3.io/cp-3 -n openshift-machine-api Jan 16 20:45:28 api-int.lab.ocpipi.lan bootkube.sh[7556]: Updated status for "99_openshift-cluster-api_hosts-2.yaml" baremetalhosts.v1alpha1.metal3.io/cp-3 -n openshift-machine-api Jan 16 20:45:28 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_openshift-cluster-api_hosts-3.yaml" baremetalhosts.v1alpha1.metal3.io/w-1 -n openshift-machine-api Jan 16 20:45:28 api-int.lab.ocpipi.lan bootkube.sh[7556]: Updated status for "99_openshift-cluster-api_hosts-3.yaml" baremetalhosts.v1alpha1.metal3.io/w-1 -n openshift-machine-api Jan 16 20:45:29 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_openshift-cluster-api_hosts-4.yaml" baremetalhosts.v1alpha1.metal3.io/w-2 -n openshift-machine-api Jan 16 20:45:29 api-int.lab.ocpipi.lan bootkube.sh[7556]: Updated status for "99_openshift-cluster-api_hosts-4.yaml" baremetalhosts.v1alpha1.metal3.io/w-2 -n openshift-machine-api Jan 16 20:45:29 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_openshift-cluster-api_master-machines-0.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-0 -n openshift-machine-api Jan 16 20:45:30 api-int.lab.ocpipi.lan bootkube.sh[7556]: Updated status for "99_openshift-cluster-api_master-machines-0.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-0 -n openshift-machine-api Jan 16 20:45:30 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_openshift-cluster-api_master-machines-1.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-1 -n openshift-machine-api Jan 16 20:45:30 api-int.lab.ocpipi.lan bootkube.sh[7556]: Updated status for "99_openshift-cluster-api_master-machines-1.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-1 -n openshift-machine-api Jan 16 20:45:31 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_openshift-cluster-api_master-machines-2.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-2 -n openshift-machine-api Jan 16 20:45:31 api-int.lab.ocpipi.lan bootkube.sh[7556]: Updated status for "99_openshift-cluster-api_master-machines-2.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-2 -n openshift-machine-api Jan 16 20:45:31 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_openshift-cluster-api_master-user-data-secret.yaml" secrets.v1./master-user-data-managed -n openshift-machine-api Jan 16 20:45:32 api-int.lab.ocpipi.lan bootkube.sh[7556]: 
Created "99_openshift-cluster-api_worker-machineset-0.yaml" machinesets.v1beta1.machine.openshift.io/lab-wcpsl-worker-0 -n openshift-machine-api Jan 16 20:45:32 api-int.lab.ocpipi.lan bootkube.sh[7556]: Updated status for "99_openshift-cluster-api_worker-machineset-0.yaml" machinesets.v1beta1.machine.openshift.io/lab-wcpsl-worker-0 -n openshift-machine-api Jan 16 20:45:32 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_openshift-cluster-api_worker-user-data-secret.yaml" secrets.v1./worker-user-data-managed -n openshift-machine-api Jan 16 20:45:33 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "configmap-admin-kubeconfig-client-ca.yaml" configmaps.v1./admin-kubeconfig-client-ca -n openshift-config Jan 16 20:45:33 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#4] failed to create some manifests: Jan 16 20:45:33 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:33 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#5] failed to create some manifests: Jan 16 20:45:33 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:33 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#6] failed to create some manifests: Jan 16 20:45:33 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:33 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#7] failed to create some manifests: Jan 16 20:45:33 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:33 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#8] failed to create some manifests: Jan 16 20:45:33 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:33 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#9] failed to create some manifests: Jan 16 20:45:33 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:34 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#10] failed to create some manifests: Jan 16 20:45:34 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:34 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#11] failed to create some manifests: Jan 16 20:45:34 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:34 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#12] failed to create some manifests: Jan 16 20:45:34 api-int.lab.ocpipi.lan bootkube.sh[7556]: 
"99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:34 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#13] failed to create some manifests: Jan 16 20:45:34 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:34 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#14] failed to create some manifests: Jan 16 20:45:34 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:35 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#15] failed to create some manifests: Jan 16 20:45:35 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:35 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#16] failed to create some manifests: Jan 16 20:45:35 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:35.466601 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:45:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:35.469600 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:45:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:35.472262 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:45:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:35.472538 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:45:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:35.472633 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:45:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:35.474677 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:45:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:35.474851 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:45:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:35.474912 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:45:35 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#17] failed to create some manifests: Jan 16 20:45:35 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:35 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#18] failed to create some manifests: Jan 
16 20:45:35 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:35 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#19] failed to create some manifests: Jan 16 20:45:35 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:36 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#20] failed to create some manifests: Jan 16 20:45:36 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:36 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#21] failed to create some manifests: Jan 16 20:45:36 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:36 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#22] failed to create some manifests: Jan 16 20:45:36 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:36 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#23] failed to create some manifests: Jan 16 20:45:36 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:36 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#24] failed to create some manifests: Jan 16 20:45:36 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:37 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#25] failed to create some manifests: Jan 16 20:45:37 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:37 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#26] failed to create some manifests: Jan 16 20:45:37 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:37.480795 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:45:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:37.488167 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:45:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:37.489185 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:45:37 api-int.lab.ocpipi.lan 
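By attempt [#4] everything except 99_baremetal-provisioning-config.yaml has been created, and the loop settles into one identical failure per attempt. When tailing a live bootstrap it can help to filter the journal down to just these summaries; this assumes bootkube runs as the bootkube.service systemd unit, which this log does not show directly:

  # Count and inspect the remaining per-attempt failures.
  journalctl -b -u bootkube.service --no-pager \
    | grep -E '\[#[0-9]+\] failed to create some manifests|unable to get REST mapping' \
    | tail -n 20
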
Jan 16 20:45:37 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#27] failed to create some manifests:
Jan 16 20:45:37 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:37 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#28] failed to create some manifests:
Jan 16 20:45:37 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:37 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#29] failed to create some manifests:
Jan 16 20:45:37 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:38 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#30] failed to create some manifests:
Jan 16 20:45:38 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:38 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#31] failed to create some manifests:
Jan 16 20:45:38 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:38 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#32] failed to create some manifests:
Jan 16 20:45:38 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:38 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#33] failed to create some manifests:
Jan 16 20:45:38 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:38 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#34] failed to create some manifests:
Jan 16 20:45:38 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:39 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#35] failed to create some manifests:
Jan 16 20:45:39 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:39 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#36] failed to create some manifests:
Jan 16 20:45:39 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:39 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#37] failed to create some manifests:
Jan 16 20:45:39 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:39 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#38] failed to create some manifests:
Jan 16 20:45:39 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:39 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#39] failed to create some manifests:
Jan 16 20:45:39 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:40 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#40] failed to create some manifests:
Jan 16 20:45:40 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:40 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#41] failed to create some manifests:
Jan 16 20:45:40 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:40.466556 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:45:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:40.474471 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:45:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:40.476438 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:45:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:40.476697 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:45:40 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#42] failed to create some manifests:
Jan 16 20:45:40 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:40 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#43] failed to create some manifests:
Jan 16 20:45:40 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:40 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#44] failed to create some manifests:
Jan 16 20:45:40 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:41 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#45] failed to create some manifests:
Jan 16 20:45:41 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:41 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#46] failed to create some manifests:
Jan 16 20:45:41 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:41.467441 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:45:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:41.469531 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:45:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:41.472805 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:45:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:41.473132 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:45:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:41.473192 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:45:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:41.475629 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:45:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:41.476130 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:45:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:41.476521 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:45:41 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#47] failed to create some manifests:
Jan 16 20:45:41 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:41 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#48] failed to create some manifests:
Jan 16 20:45:41 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:41 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#49] failed to create some manifests:
Jan 16 20:45:41 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:42 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#50] failed to create some manifests:
Jan 16 20:45:42 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:42 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#51] failed to create some manifests:
Jan 16 20:45:42 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:42 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#52] failed to create some manifests:
Jan 16 20:45:42 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:42 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#53] failed to create some manifests:
Jan 16 20:45:42 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:42 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#54] failed to create some manifests:
Jan 16 20:45:42 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:43 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#55] failed to create some manifests:
Jan 16 20:45:43 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:43 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#56] failed to create some manifests:
Jan 16 20:45:43 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:43 api-int.lab.ocpipi.lan approve-csr.sh[9244]: No resources found
Jan 16 20:45:43 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#57] failed to create some manifests:
Jan 16 20:45:43 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:43.737257 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 20:45:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:43.737621 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 20:45:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:43.737697 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 20:45:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:43.737740 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 20:45:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:43.737785 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 20:45:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:43.737818 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 20:45:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:43.737860 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 20:45:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:43.737900 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 20:45:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:43.738070 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 20:45:43 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#58] failed to create some manifests:
Jan 16 20:45:43 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:43 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#59] failed to create some manifests:
Jan 16 20:45:43 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:44 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#60] failed to create some manifests:
Jan 16 20:45:44 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:44 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:45:44 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#61] failed to create some manifests:
Jan 16 20:45:44 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:44 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#62] failed to create some manifests:
Jan 16 20:45:44 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:44 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#63] failed to create some manifests:
Jan 16 20:45:44 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:44 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#64] failed to create some manifests:
Jan 16 20:45:44 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:45 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#65] failed to create some manifests:
Jan 16 20:45:45 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:45 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#66] failed to create some manifests:
Jan 16 20:45:45 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:45 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#67] failed to create some manifests:
Jan 16 20:45:45 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:45 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#68] failed to create some manifests:
Jan 16 20:45:45 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:45 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#69] failed to create some manifests:
Jan 16 20:45:45 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:46 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#70] failed to create some manifests:
Jan 16 20:45:46 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:46 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#71] failed to create some manifests:
Jan 16 20:45:46 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:46 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#72] failed to create some manifests:
Jan 16 20:45:46 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:46 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#73] failed to create some manifests:
Jan 16 20:45:46 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:46 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#74] failed to create some manifests:
Jan 16 20:45:46 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:47 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#75] failed to create some manifests:
Jan 16 20:45:47 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1"
Jan 16 20:45:47 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#76] failed to create some manifests:
Jan 16 20:45:47 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get
REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:47.465877 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:45:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:47.473758 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:45:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:47.474169 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:45:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:47.474280 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:45:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:47.573148 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:45:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:47.579811 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:45:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:47.580174 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:45:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:47.580230 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:45:47 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#77] failed to create some manifests: Jan 16 20:45:47 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:47 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#78] failed to create some manifests: Jan 16 20:45:47 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:47 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#79] failed to create some manifests: Jan 16 20:45:47 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:48 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#80] failed to create some manifests: Jan 16 20:45:48 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:48 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#81] failed to create some manifests: Jan 16 20:45:48 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:48 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#82] failed to create some manifests: Jan 16 20:45:48 api-int.lab.ocpipi.lan bootkube.sh[7556]: 
"99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:48 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#83] failed to create some manifests: Jan 16 20:45:48 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:48 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#84] failed to create some manifests: Jan 16 20:45:48 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:49 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#85] failed to create some manifests: Jan 16 20:45:49 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:49 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#86] failed to create some manifests: Jan 16 20:45:49 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:49 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#87] failed to create some manifests: Jan 16 20:45:49 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:49 api-int.lab.ocpipi.lan bootkube.sh[7556]: [#88] failed to create some manifests: Jan 16 20:45:49 api-int.lab.ocpipi.lan bootkube.sh[7556]: "99_baremetal-provisioning-config.yaml": unable to get REST mapping for "99_baremetal-provisioning-config.yaml": no matches for kind "Provisioning" in version "metal3.io/v1alpha1" Jan 16 20:45:52 api-int.lab.ocpipi.lan bootkube.sh[7556]: Created "99_baremetal-provisioning-config.yaml" provisionings.v1alpha1.metal3.io/provisioning-configuration -n Jan 16 20:45:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:57.700253 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:45:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:57.708270 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:45:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:57.708574 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:45:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:45:57.708627 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:46:04 api-int.lab.ocpipi.lan approve-csr.sh[9324]: No resources found Jan 16 20:46:04 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:46:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:07.466742 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:46:07 
Jan 16 20:46:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:07.483292 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:46:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:07.483593 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:46:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:07.483671 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:46:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:07.750854 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:46:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:07.757569 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:46:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:07.758310 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:46:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:07.758889 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:46:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:17.839310 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:46:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:17.847855 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:46:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:17.848329 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:46:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:17.848495 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:46:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.q4RZ36.mount: Deactivated successfully.
Jan 16 20:46:24 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:46:25 api-int.lab.ocpipi.lan approve-csr.sh[9403]: No resources found
Jan 16 20:46:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:25.467651 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:46:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:25.474351 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:46:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:25.474592 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:46:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:25.474648 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:46:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:27.467239 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:46:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:27.474895 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:46:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:27.475367 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:46:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:27.475543 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:46:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:27.916328 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:46:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:27.922312 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:46:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:27.922685 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:46:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:27.922818 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:46:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:37.994763 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:46:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:38.006263 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:46:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:38.007225 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:46:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:38.007299 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:46:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:43.468666 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
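The approve-csr.sh entries print "No resources found" because oc finds no pending certificate signing requests while no control-plane node has tried to join yet. A sketch of the shape such an approval loop typically takes (the actual script on the host may differ):

    # Hedged sketch, assuming the usual bootstrap kubeconfig path.
    export KUBECONFIG=/etc/kubernetes/kubeconfig
    while true; do
      # With no CSRs present, `oc get csr` prints "No resources found",
      # matching the log lines above.
      oc get csr -o name | while read -r csr; do
        oc adm certificate approve "$csr"
      done
      sleep 20
    done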
attach/detach" Jan 16 20:46:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:43.475618 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:46:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:43.475839 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:46:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:43.475901 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:46:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:43.738593 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 20:46:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:43.740685 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 20:46:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:43.741289 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 20:46:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:43.741814 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 20:46:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:43.742419 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 20:46:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:43.742908 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 20:46:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:43.743721 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 20:46:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:43.744275 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running Jan 16 20:46:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:43.744771 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 20:46:44 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:46:45 api-int.lab.ocpipi.lan approve-csr.sh[9492]: No resources found Jan 16 20:46:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:46.466692 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:46:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:46.477464 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:46:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:46.477645 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:46:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:46.477687 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" 
event="NodeHasSufficientPID" Jan 16 20:46:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:48.085345 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:46:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:48.092339 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:46:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:48.092460 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:46:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:48.092693 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:46:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:54.467494 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:46:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:54.473775 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:46:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:54.474525 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:46:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:54.474815 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:46:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:58.210340 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:46:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:58.216709 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:46:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:58.217669 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:46:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:58.217740 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:46:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:58.467211 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:46:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:58.474479 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:46:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:58.475308 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:46:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:46:58.476139 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:47:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:00.467543 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:47:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:00.475559 2579 kubelet_node_status.go:696] "Recording event message for node" 
node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:47:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:00.475859 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:47:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:00.476085 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:47:04 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:47:06 api-int.lab.ocpipi.lan approve-csr.sh[9570]: No resources found Jan 16 20:47:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:08.289293 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:47:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:08.295586 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:47:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:08.295852 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:47:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:08.296823 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:47:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:17.466841 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:47:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:17.473153 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:47:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:17.474129 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:47:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:17.474631 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:47:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:18.373315 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:47:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:18.381225 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:47:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:18.381411 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:47:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:18.381466 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:47:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.M4kdgC.mount: Deactivated successfully. 
Jan 16 20:47:25 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:47:27 api-int.lab.ocpipi.lan approve-csr.sh[9648]: No resources found
Jan 16 20:47:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:28.461310 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:47:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:28.473560 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:47:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:28.473911 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:47:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:28.474159 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:47:28 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.S2ch5C.mount: Deactivated successfully.
Jan 16 20:47:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:31.468633 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:47:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:31.477164 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:47:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:31.477439 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:47:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:31.477536 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:47:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:35.475252 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:47:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:35.489619 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:47:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:35.490075 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:47:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:35.490143 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:47:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:38.591774 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:47:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:38.597192 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:47:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:38.597380 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:47:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:38.597463 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:47:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:43.747210 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 20:47:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:43.747467 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 20:47:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:43.747530 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 20:47:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:43.747601 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 20:47:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:43.747656 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 20:47:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:43.747766 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 20:47:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:43.747840 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 20:47:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:43.748233 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 20:47:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:43.748289 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 20:47:43 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 20:47:43.822615493Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2" id=61eed4f6-c568-4b53-ae6b-512e589c5797 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:47:43 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 20:47:43.824345506Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a5beb712367dd5020b5a7b99c2ffbfcd91d3c6c425625d5cc816f58cf145564f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2],Size_:448590957,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=61eed4f6-c568-4b53-ae6b-512e589c5797 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:47:45 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:47:48 api-int.lab.ocpipi.lan approve-csr.sh[9730]: No resources found
Jan 16 20:47:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:48.677261 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:47:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:48.682579 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:47:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:48.683274 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
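The two crio entries are a CRI ImageStatus round trip: the kubelet asks about a release payload image by digest, and CRI-O reports it already present locally (Id a5beb712..., about 448 MB). The same query can be reproduced by hand with crictl on the host; a sketch, with the digest copied from the log entry above:

    # Inspect the image CRI-O reported on.
    sudo crictl inspecti \
      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2
    # List stored images with digests to confirm it is present.
    sudo crictl images --digests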
message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:47:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:48.683339 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:47:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:49.466443 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:47:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:49.474468 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:47:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:49.474673 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:47:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:49.474729 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:47:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:55.468357 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:47:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:55.477882 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:47:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:55.478256 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:47:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:55.478320 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:47:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:58.467190 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:47:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:58.511469 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:47:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:58.512269 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:47:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:58.512355 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:47:58 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.UES1lD.mount: Deactivated successfully. 
Jan 16 20:47:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:58.745540 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:47:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:58.750502 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:47:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:58.751412 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:47:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:47:58.751839 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:48:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:04.468803 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:48:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:04.474250 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:48:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:04.474433 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:48:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:04.474491 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:48:05 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:48:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:08.821440 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:48:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:08.839658 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:48:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:08.840443 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:48:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:08.840807 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:48:08 api-int.lab.ocpipi.lan approve-csr.sh[9806]: No resources found Jan 16 20:48:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:15.468632 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:48:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:15.476177 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:48:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:15.477204 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:48:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:15.477283 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:48:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.dBLy1O.mount: Deactivated successfully. 
Jan 16 20:48:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:18.975473 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:48:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:18.983907 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:48:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:18.985269 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:48:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:18.985534 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:48:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:20.467697 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:48:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:20.477245 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:48:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:20.478446 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:48:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:20.478554 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:48:25 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:48:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:29.059401 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:48:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:29.065763 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:48:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:29.065881 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:48:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:29.066251 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:48:29 api-int.lab.ocpipi.lan approve-csr.sh[9905]: No resources found Jan 16 20:48:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:37.466767 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:48:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:37.473553 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:48:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:37.473850 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:48:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:37.474159 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:48:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:39.165440 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller 
attach/detach" Jan 16 20:48:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:39.171311 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:48:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:39.171402 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:48:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:39.171450 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:48:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:42.469377 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:48:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:42.475459 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:48:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:42.476271 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:48:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:42.476495 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:48:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:43.749607 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 20:48:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:43.751225 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 20:48:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:43.751358 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 20:48:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:43.751447 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 20:48:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:43.751540 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running Jan 16 20:48:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:43.751636 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 20:48:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:43.751691 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 20:48:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:43.751786 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 20:48:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:43.751851 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 20:48:45 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:48:48 api-int.lab.ocpipi.lan systemd[1]: 
Jan 16 20:48:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:49.236892 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:48:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:49.242718 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:48:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:49.243678 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:48:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:49.244246 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:48:50 api-int.lab.ocpipi.lan approve-csr.sh[9984]: No resources found
Jan 16 20:48:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:56.467443 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:48:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:56.472521 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:48:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:56.472723 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:48:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:56.472780 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:48:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:57.467500 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:48:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:57.474774 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:48:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:57.475450 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:48:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:57.475519 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:48:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:59.316594 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:48:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:59.322793 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:48:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:59.323707 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:48:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:48:59.324474 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:49:05 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:49:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:06.468691 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:49:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:06.474873 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:49:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:06.475337 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:49:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:06.475399 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:49:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:07.467556 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:49:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:07.474706 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:49:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:07.475783 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:49:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:07.475848 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:49:08 api-int.lab.ocpipi.lan sudo[10044]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps
Jan 16 20:49:08 api-int.lab.ocpipi.lan sudo[10044]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000)
Jan 16 20:49:08 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.ZSssb6.mount: Deactivated successfully.
Jan 16 20:49:08 api-int.lab.ocpipi.lan sudo[10044]: pam_unix(sudo:session): session closed for user root
Jan 16 20:49:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:09.366554 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:49:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:09.378918 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:49:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:09.380483 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:49:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:09.380856 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:49:11 api-int.lab.ocpipi.lan approve-csr.sh[10076]: No resources found
Jan 16 20:49:17 api-int.lab.ocpipi.lan sudo[10097]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps
Jan 16 20:49:17 api-int.lab.ocpipi.lan sudo[10097]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000)
Jan 16 20:49:17 api-int.lab.ocpipi.lan sudo[10097]: pam_unix(sudo:session): session closed for user root
Jan 16 20:49:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.st5lSE.mount: Deactivated successfully.
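The sudo entries record an operator on the bootstrap node inspecting the bootstrap containers as the core user: podman ps lists the containers that run outside the kubelet (bootkube and friends), and podman logs <id> pulls an individual container's output, as the next entries show with the ID prefix 5cf0. The equivalent commands on the host:

    sudo podman ps
    sudo podman logs 5cf0   # "5cf0" is the container ID prefix seen in the log below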
Jan 16 20:49:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:19.479781 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:49:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:19.487381 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:49:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:19.487688 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:49:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:19.487754 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:49:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:25.466608 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:49:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:25.472385 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:49:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:25.472649 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:49:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:25.472708 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:49:26 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:49:26 api-int.lab.ocpipi.lan sudo[10145]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman logs 5cf0 Jan 16 20:49:26 api-int.lab.ocpipi.lan sudo[10145]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 20:49:26 api-int.lab.ocpipi.lan sudo[10145]: pam_unix(sudo:session): session closed for user root Jan 16 20:49:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:27.467590 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:49:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:27.473384 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:49:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:27.473478 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:49:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:27.473538 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:49:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:28.469883 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:49:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:28.480560 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:49:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:28.480757 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:49:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:28.480820 2579 
kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:49:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:29.557686 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:49:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:29.564374 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:49:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:29.564684 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:49:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:29.564745 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:49:32 api-int.lab.ocpipi.lan approve-csr.sh[10178]: No resources found Jan 16 20:49:38 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.WdA2xO.mount: Deactivated successfully. Jan 16 20:49:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:39.638244 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:49:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:39.643661 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:49:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:39.644047 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:49:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:39.644109 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:49:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:43.753267 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 20:49:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:43.753636 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 20:49:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:43.753728 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 20:49:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:43.753793 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 20:49:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:43.753854 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 20:49:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:43.753899 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running Jan 16 20:49:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:43.754149 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 20:49:43 api-int.lab.ocpipi.lan 
kubelet.sh[2579]: I0116 20:49:43.754202 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 20:49:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:43.754259 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 20:49:46 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:49:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:47.466617 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:49:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:47.473460 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:49:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:47.473727 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:49:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:47.473787 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:49:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:49.466474 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:49:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:49.471873 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:49:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:49.472623 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:49:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:49.472890 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:49:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:49.709721 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:49:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:49.715637 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:49:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:49.715892 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:49:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:49.716546 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:49:52 api-int.lab.ocpipi.lan approve-csr.sh[10263]: No resources found Jan 16 20:49:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:59.800799 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:49:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:59.810549 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:49:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:59.812125 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 
16 20:49:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:49:59.812193 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:50:06 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:50:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:07.466879 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:50:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:07.472657 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:50:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:07.472844 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:50:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:07.472902 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:50:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:09.878807 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:50:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:09.885806 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:50:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:09.886676 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:50:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:09.887576 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:50:13 api-int.lab.ocpipi.lan approve-csr.sh[10340]: No resources found Jan 16 20:50:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.7sncdJ.mount: Deactivated successfully. 
Jan 16 20:50:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:19.957333 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:50:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:19.966159 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:50:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:19.966364 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:50:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:19.966421 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:50:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:25.467196 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:50:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:25.474783 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:50:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:25.475224 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:50:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:25.475287 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:50:26 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:50:28 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.4CnQIb.mount: Deactivated successfully.
Jan 16 20:50:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:30.054900 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:50:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:30.061708 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:50:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:30.062661 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:50:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:30.062833 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:50:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:30.467424 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:50:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:30.472837 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:50:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:30.473648 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:50:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:30.474364 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:50:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:31.466460 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:50:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:31.467192 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:50:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:31.474878 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:50:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:31.475398 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:50:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:31.475460 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:50:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:31.474652 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:50:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:31.475724 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:50:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:31.475780 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:50:34 api-int.lab.ocpipi.lan approve-csr.sh[10418]: No resources found
Jan 16 20:50:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:36.467799 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:50:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:36.473855 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:50:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:36.474191 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:50:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:36.474253 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:50:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:40.137246 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:50:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:40.143352 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:50:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:40.143623 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:50:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:40.143688 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:50:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:43.755216 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 20:50:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:43.755440 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 20:50:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:43.755505 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 20:50:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:43.755678 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 20:50:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:43.755745 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 20:50:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:43.755791 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 20:50:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:43.755855 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 20:50:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:43.756359 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 20:50:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:43.756465 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 20:50:46 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:50:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:49.467224 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:50:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:49.474256 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:50:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:49.475271 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:50:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:49.475876 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:50:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:50.213366 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:50:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:50.219341 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:50:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:50.220350 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:50:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:50:50.220474 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:50:55 api-int.lab.ocpipi.lan approve-csr.sh[10496]: No resources found
Jan 16 20:50:56 api-int.lab.ocpipi.lan systemd[1948]: Started podman-10508.scope.
Jan 16 20:51:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:00.294477 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:51:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:00.304203 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:51:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:00.304706 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:51:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:00.304770 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:51:00 api-int.lab.ocpipi.lan sudo[10540]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps
Jan 16 20:51:00 api-int.lab.ocpipi.lan sudo[10540]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000)
Jan 16 20:51:00 api-int.lab.ocpipi.lan sudo[10540]: pam_unix(sudo:session): session closed for user root
Jan 16 20:51:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:03.468481 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:51:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:03.481914 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:51:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:03.482294 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:51:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:03.482352 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:51:06 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:51:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:10.396578 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:51:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:10.408536 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:51:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:10.408825 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:51:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:10.408891 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:51:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:12.468722 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:51:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:12.475306 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:51:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:12.476749 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:51:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:12.477297 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:51:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:14.466452 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:51:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:14.473265 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:51:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:14.473451 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:51:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:14.475126 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:51:16 api-int.lab.ocpipi.lan approve-csr.sh[10594]: No resources found
Jan 16 20:51:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.4ukK4T.mount: Deactivated successfully.
Jan 16 20:51:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:20.499489 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:51:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:20.509175 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:51:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:20.509749 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:51:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:20.511262 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:51:26 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:51:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:30.601858 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:51:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:30.610223 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:51:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:30.610304 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:51:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:30.610369 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:51:36 api-int.lab.ocpipi.lan approve-csr.sh[10676]: No resources found
Jan 16 20:51:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:40.755427 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:51:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:40.762278 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:51:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:40.763200 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:51:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:40.763735 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:51:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:42.467090 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:51:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:42.473455 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:51:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:42.474301 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:51:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:42.475181 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:51:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:43.466619 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:51:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:43.473499 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:51:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:43.473739 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:51:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:43.473801 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:51:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:43.757439 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 20:51:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:43.757886 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 20:51:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:43.758300 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 20:51:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:43.758359 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 20:51:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:43.758466 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 20:51:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:43.758529 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 20:51:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:43.758594 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 20:51:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:43.758760 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 20:51:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:43.758840 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 20:51:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:46.467508 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:51:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:46.476876 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:51:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:46.477182 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:51:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:46.477242 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:51:47 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:51:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:50.848214 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:51:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:50.858777 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:51:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:50.859504 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:51:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:50.860079 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:51:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:52.467193 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:51:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:52.486369 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:51:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:52.486619 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:51:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:52.486808 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:51:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:57.466898 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:51:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:57.472376 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:51:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:57.472615 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:51:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:51:57.472798 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:51:57 api-int.lab.ocpipi.lan approve-csr.sh[10756]: No resources found
Jan 16 20:52:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:00.944839 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:52:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:00.952895 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:52:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:00.953166 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:52:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:00.953220 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:52:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:04.467255 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:52:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:04.472266 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:52:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:04.472357 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:52:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:04.472406 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:52:07 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:52:08 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.uoAWnJ.mount: Deactivated successfully.
Jan 16 20:52:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:11.037290 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:52:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:11.044845 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:52:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:11.045221 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:52:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:11.045277 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:52:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:13.479407 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:52:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:13.488652 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:52:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:13.489472 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:52:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:13.490242 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:52:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:14.468329 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:52:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:14.475143 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:52:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:14.475897 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:52:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:14.476599 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:52:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:17.466577 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:52:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:17.474389 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:52:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:17.474601 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:52:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:17.474768 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:52:18 api-int.lab.ocpipi.lan approve-csr.sh[10836]: No resources found
Jan 16 20:52:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.WV2qa8.mount: Deactivated successfully.
Jan 16 20:52:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:21.139499 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:52:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:21.147827 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:52:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:21.148173 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:52:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:21.148229 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:52:27 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:52:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:31.216409 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:52:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:31.221073 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:52:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:31.221121 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:52:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:31.221145 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:52:39 api-int.lab.ocpipi.lan approve-csr.sh[10924]: No resources found
Jan 16 20:52:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:41.260372 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:52:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:41.272246 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:52:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:41.272884 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:52:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:41.273500 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:52:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:43.759425 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 20:52:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:43.759880 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 20:52:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:43.760216 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 20:52:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:43.760285 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 20:52:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:43.760353 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 20:52:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:43.760401 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 20:52:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:43.760456 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 20:52:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:43.760513 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 20:52:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:43.760608 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 20:52:43 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 20:52:43.838842891Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2" id=451e4ebf-6210-4f29-86f4-faec7eb0d276 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:52:43 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 20:52:43.841775818Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a5beb712367dd5020b5a7b99c2ffbfcd91d3c6c425625d5cc816f58cf145564f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2],Size_:448590957,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=451e4ebf-6210-4f29-86f4-faec7eb0d276 name=/runtime.v1.ImageService/ImageStatus
Jan 16 20:52:47 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:52:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:51.348332 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:52:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:51.354681 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:52:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:51.355469 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:52:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:52:51.355807 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:52:59 api-int.lab.ocpipi.lan approve-csr.sh[11013]: No resources found
Jan 16 20:53:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:01.429882 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:53:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:01.437191 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:53:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:01.437479 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:53:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:01.437548 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:53:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:01.467799 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:53:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:01.477392 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:53:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:01.477582 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:53:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:01.477635 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:53:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:03.466387 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:53:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:03.468314 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:53:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:03.472857 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:53:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:03.473551 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:53:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:03.474246 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:53:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:03.475349 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:53:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:03.475615 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:53:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:03.475679 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:53:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:07.469356 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:53:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:07.475453 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:53:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:07.475548 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:53:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:07.475595 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:53:07 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:53:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:11.511335 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:53:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:11.518661 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:53:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:11.519563 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:53:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:11.520419 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:53:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:13.466493 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:53:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:13.473403 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:53:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:13.473607 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:53:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:13.473662 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:53:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:14.467387 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:53:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:14.474272 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:53:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:14.474381 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:53:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:14.474427 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:53:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:16.466891 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:53:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:16.473269 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:53:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:16.473380 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:53:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:16.473802 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:53:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.x81jVu.mount: Deactivated successfully.
Jan 16 20:53:20 api-int.lab.ocpipi.lan approve-csr.sh[11094]: No resources found
Jan 16 20:53:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:21.594388 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:53:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:21.602620 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:53:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:21.602847 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:53:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:21.602912 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:53:27 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:53:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:31.689553 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:53:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:31.698119 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:53:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:31.698849 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:53:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:31.699473 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:53:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:34.468227 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:53:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:34.474370 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:53:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:34.474561 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:53:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:34.474618 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:53:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:41.470405 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:53:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:41.477499 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:53:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:41.477684 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:53:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:41.477837 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:53:41 api-int.lab.ocpipi.lan approve-csr.sh[11175]: No resources found
Jan 16 20:53:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:41.780639 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:53:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:41.789184 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:53:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:41.789305 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:53:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:41.789354 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:53:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:43.761218 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 20:53:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:43.764180 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 20:53:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:43.764441 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 20:53:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:43.764511 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 20:53:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:43.764619 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 20:53:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:43.764683 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 20:53:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:43.764859 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 20:53:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:43.765111 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 20:53:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:43.765174 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 20:53:48 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:53:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:51.875247 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:53:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:51.881402 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:53:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:51.881611 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:53:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:53:51.881689 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:54:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:01.954182 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:54:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:01.961902 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:54:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:01.962351 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:54:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:01.962413 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:54:02 api-int.lab.ocpipi.lan approve-csr.sh[11254]: No resources found
Jan 16 20:54:08 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:54:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:08.467468 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:54:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:08.472696 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:54:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:08.473244 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:54:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:08.473306 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:54:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:10.471354 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:54:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:10.491390 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:54:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:10.491594 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:54:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:10.491667 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:54:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:12.051509 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:54:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:12.058631 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:54:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:12.058746 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:54:12
api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:12.059076 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:54:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.owK7zZ.mount: Deactivated successfully. Jan 16 20:54:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:22.136641 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:54:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:22.144700 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:54:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:22.146040 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:54:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:22.146473 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:54:23 api-int.lab.ocpipi.lan approve-csr.sh[11332]: No resources found Jan 16 20:54:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:27.468210 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:54:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:27.481253 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:54:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:27.482393 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:54:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:27.483510 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:54:28 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:54:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:29.467695 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:54:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:29.468238 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:54:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:29.476107 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:54:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:29.476323 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:54:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:29.476383 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:54:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:29.479302 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:54:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:29.479510 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:54:29 
api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:29.479565 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:54:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:32.256329 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:54:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:32.263418 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:54:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:32.264680 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:54:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:32.265507 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:54:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:38.467698 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:54:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:38.474227 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:54:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:38.474520 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:54:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:38.474582 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:54:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:41.467140 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:54:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:41.474394 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:54:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:41.475390 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:54:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:41.475446 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:54:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:42.337452 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:54:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:42.345563 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:54:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:42.346492 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:54:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:42.347509 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:54:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:43.767559 2579 kubelet_getters.go:187] "Pod status updated" 
pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 20:54:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:43.768169 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 20:54:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:43.768247 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 20:54:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:43.768290 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 20:54:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:43.768355 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 20:54:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:43.768410 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 20:54:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:43.768473 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 20:54:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:43.768520 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running Jan 16 20:54:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:43.768757 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 20:54:44 api-int.lab.ocpipi.lan approve-csr.sh[11412]: No resources found Jan 16 20:54:48 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:54:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:52.421274 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:54:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:52.428652 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:54:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:52.429175 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:54:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:52.429242 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:54:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:53.467356 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:54:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:53.475748 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:54:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:53.476236 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:54:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:54:53.476294 2579 kubelet_node_status.go:696] "Recording event message for node" 
node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:54:58 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.J2r8Gs.mount: Deactivated successfully. Jan 16 20:55:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:02.504693 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:55:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:02.511176 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:55:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:02.511472 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:55:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:02.511533 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:55:04 api-int.lab.ocpipi.lan approve-csr.sh[11492]: No resources found Jan 16 20:55:08 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:55:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:09.467747 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:55:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:09.474122 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:55:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:09.474449 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:55:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:09.474506 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:55:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:12.466549 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:55:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:12.471227 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:55:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:12.471409 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:55:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:12.471466 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:55:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:12.588768 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:55:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:12.595653 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:55:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:12.596175 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:55:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:12.596236 2579 kubelet_node_status.go:696] "Recording event message for node" 
node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:55:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.imDTaN.mount: Deactivated successfully. Jan 16 20:55:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:22.674157 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:55:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:22.680373 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:55:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:22.680581 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:55:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:22.680689 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:55:25 api-int.lab.ocpipi.lan approve-csr.sh[11572]: No resources found Jan 16 20:55:28 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.I2l3bI.mount: Deactivated successfully. Jan 16 20:55:28 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:55:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:32.742474 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:55:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:32.750639 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:55:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:32.751218 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:55:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:32.751326 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:55:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:37.466416 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:55:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:37.472251 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:55:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:37.472466 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:55:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:37.472530 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:55:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:42.859746 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:55:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:42.869342 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:55:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:42.870568 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" 
event="NodeHasNoDiskPressure" Jan 16 20:55:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:42.871326 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:55:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:43.769759 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 20:55:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:43.770294 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running Jan 16 20:55:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:43.770377 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 20:55:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:43.770427 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 20:55:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:43.770479 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 20:55:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:43.770538 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 20:55:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:43.770584 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 20:55:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:43.770639 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 20:55:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:43.770691 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 20:55:46 api-int.lab.ocpipi.lan approve-csr.sh[11654]: No resources found Jan 16 20:55:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:47.467395 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:55:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:47.474751 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:55:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:47.475287 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:55:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:47.475355 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:55:48 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.54UwYQ.mount: Deactivated successfully. 
Jan 16 20:55:48 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:55:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:49.466204 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:55:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:49.470262 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:55:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:49.470351 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:55:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:49.470404 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:55:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:52.939280 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:55:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:52.945725 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:55:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:52.946108 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:55:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:52.946167 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:55:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:53.466403 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:55:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:53.472528 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:55:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:53.472737 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:55:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:53.472905 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:55:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:55.466886 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:55:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:55.473568 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:55:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:55.473677 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:55:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:55.473725 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:55:58 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.kteQkt.mount: Deactivated successfully. 
Jan 16 20:55:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:59.467618 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:55:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:59.475712 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:55:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:59.476207 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:55:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:55:59.476269 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:56:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:03.026494 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:56:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:03.034158 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:56:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:03.034392 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:56:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:03.034449 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:56:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:03.467332 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:56:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:03.475530 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:56:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:03.475729 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:56:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:03.475783 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:56:07 api-int.lab.ocpipi.lan approve-csr.sh[11731]: No resources found Jan 16 20:56:08 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.loXpJj.mount: Deactivated successfully. 
Jan 16 20:56:09 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:56:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:13.103312 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:56:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:13.110236 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:56:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:13.111494 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:56:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:13.112120 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:56:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:16.468576 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:56:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:16.469401 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:56:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:16.484724 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:56:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:16.485172 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:56:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:16.485236 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:56:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:16.489603 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:56:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:16.489733 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:56:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:16.490188 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:56:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:23.206279 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:56:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:23.213467 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:56:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:23.214164 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:56:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:23.214334 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:56:27 api-int.lab.ocpipi.lan approve-csr.sh[11810]: No resources found Jan 16 20:56:29 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:56:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 
20:56:33.281317 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:56:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:33.288286 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:56:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:33.288733 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:56:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:33.288893 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:56:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:43.353134 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:56:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:43.362596 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:56:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:43.363073 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:56:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:43.363141 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:56:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:43.772380 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 20:56:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:43.773199 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 20:56:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:43.773307 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 20:56:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:43.773361 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running Jan 16 20:56:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:43.773430 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 20:56:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:43.773477 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 20:56:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:43.773531 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 20:56:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:43.773625 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 20:56:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:43.773677 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 20:56:48 api-int.lab.ocpipi.lan approve-csr.sh[11893]: No resources 
found Jan 16 20:56:49 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:56:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:51.467237 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:56:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:51.473076 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:56:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:51.473259 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:56:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:51.473313 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:56:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:53.437122 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:56:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:53.456225 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:56:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:53.457302 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:56:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:53.457721 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:56:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:59.469420 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:56:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:59.478647 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:56:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:59.479350 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:56:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:56:59.479503 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:57:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:03.575460 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:57:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:03.583430 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:57:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:03.584308 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:57:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:03.584451 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:57:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:04.467049 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:57:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 
20:57:04.470511 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:57:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:04.471171 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:57:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:04.471254 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:57:09 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:57:09 api-int.lab.ocpipi.lan approve-csr.sh[11992]: No resources found Jan 16 20:57:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:13.688230 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:57:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:13.697292 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:57:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:13.697639 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:57:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:13.697696 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:57:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:16.466719 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:57:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:16.473090 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:57:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:16.473313 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:57:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:16.473377 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:57:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:18.483123 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:57:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:18.500490 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:57:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:18.500710 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:57:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:18.501160 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:57:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.IEQBgf.mount: Deactivated successfully. 
Jan 16 20:57:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:21.467324 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:57:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:21.476275 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:57:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:21.477026 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:57:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:21.477103 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:57:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:23.833292 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:57:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:23.839899 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:57:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:23.840269 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:57:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:23.840352 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:57:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:25.466661 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:57:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:25.473101 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:57:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:25.473420 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:57:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:25.473478 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:57:29 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:57:30 api-int.lab.ocpipi.lan approve-csr.sh[12080]: No resources found Jan 16 20:57:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:30.468084 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:57:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:30.473794 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:57:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:30.474249 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:57:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:30.474332 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:57:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:33.917500 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller 
attach/detach" Jan 16 20:57:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:33.932340 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:57:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:33.932769 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:57:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:33.933198 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:57:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:35.467180 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:57:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:35.472414 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:57:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:35.472679 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:57:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:35.472749 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:57:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:43.774830 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 20:57:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:43.775662 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running Jan 16 20:57:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:43.775835 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 20:57:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:43.776314 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 20:57:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:43.776387 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 20:57:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:43.776453 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 20:57:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:43.776498 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 20:57:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:43.776563 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 20:57:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:43.776622 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 20:57:43 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 20:57:43.857301589Z" level=info msg="Checking image status: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2" id=763bcea2-9a5c-4a0e-baf9-62ec867af72a name=/runtime.v1.ImageService/ImageStatus Jan 16 20:57:43 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 20:57:43.868517715Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a5beb712367dd5020b5a7b99c2ffbfcd91d3c6c425625d5cc816f58cf145564f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2],Size_:448590957,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=763bcea2-9a5c-4a0e-baf9-62ec867af72a name=/runtime.v1.ImageService/ImageStatus Jan 16 20:57:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:44.042626 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:57:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:44.049667 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:57:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:44.050772 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:57:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:44.051372 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:57:48 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.rvY7nD.mount: Deactivated successfully. Jan 16 20:57:49 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 20:57:51 api-int.lab.ocpipi.lan approve-csr.sh[12161]: No resources found Jan 16 20:57:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:54.122479 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 20:57:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:54.128675 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 20:57:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:54.129386 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 20:57:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:57:54.129809 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 20:57:58 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.29iDxI.mount: Deactivated successfully. 
Jan 16 20:58:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:04.214369 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:58:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:04.220647 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:58:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:04.221215 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:58:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:04.221281 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:58:08 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.xPE7pt.mount: Deactivated successfully.
Jan 16 20:58:10 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:58:11 api-int.lab.ocpipi.lan approve-csr.sh[12239]: No resources found
Jan 16 20:58:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:14.292411 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:58:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:14.299890 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:58:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:14.301499 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:58:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:14.301615 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:58:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:18.470770 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:58:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:18.499529 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:58:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:18.499622 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:58:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:18.499673 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:58:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:21.466886 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:58:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:21.473179 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:58:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:21.474371 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:58:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:21.474834 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:58:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:24.411363 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:58:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:24.417306 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:58:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:24.417790 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:58:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:24.417857 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:58:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:27.467177 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:58:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:27.472760 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:58:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:27.473433 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:58:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:27.474157 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:58:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:28.473217 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:58:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:28.493547 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:58:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:28.493731 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:58:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:28.493790 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:58:30 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:58:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:31.468638 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:58:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:31.476713 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:58:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:31.477190 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:58:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:31.477254 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:58:32 api-int.lab.ocpipi.lan approve-csr.sh[12319]: No resources found
Jan 16 20:58:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:34.492477 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:58:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:34.500665 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:58:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:34.501314 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:58:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:34.501415 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:58:38 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.3wcv2e.mount: Deactivated successfully.
Jan 16 20:58:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:42.466687 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:58:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:42.472620 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:58:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:42.473088 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:58:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:42.473264 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:58:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:43.778353 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 20:58:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:43.778850 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 20:58:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:43.779083 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 20:58:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:43.779278 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 20:58:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:43.779353 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 20:58:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:43.779404 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 20:58:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:43.779477 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 20:58:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:43.779531 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 20:58:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:43.779581 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 20:58:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:44.466867 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:58:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:44.472820 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:58:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:44.473113 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:58:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:44.473293 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:58:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:44.585700 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:58:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:44.592536 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:58:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:44.593426 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:58:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:44.593905 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:58:48 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.r7i7so.mount: Deactivated successfully.
Jan 16 20:58:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:49.467671 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:58:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:49.475705 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:58:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:49.476302 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:58:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:49.476365 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:58:50 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:58:53 api-int.lab.ocpipi.lan approve-csr.sh[12397]: No resources found
Jan 16 20:58:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:54.678870 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:58:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:54.686169 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:58:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:54.687131 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:58:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:54.687312 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:58:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:55.467521 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:58:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:55.474553 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:58:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:55.474741 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:58:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:58:55.474794 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:58:58 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.3P3uvq.mount: Deactivated successfully.
Jan 16 20:59:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:04.759120 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:59:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:04.767608 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:59:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:04.767858 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:59:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:04.768089 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:59:10 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:59:14 api-int.lab.ocpipi.lan approve-csr.sh[12476]: No resources found
Jan 16 20:59:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:14.848830 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:59:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:14.857442 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:59:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:14.858150 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:59:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:14.858651 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:59:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:24.927892 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:59:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:24.934512 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:59:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:24.935308 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:59:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:24.935503 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:59:30 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:59:34 api-int.lab.ocpipi.lan approve-csr.sh[12555]: No resources found
Jan 16 20:59:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:35.002491 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:59:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:35.009055 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:59:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:35.009141 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:59:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:35.009208 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:59:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:35.466807 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:59:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:35.473224 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:59:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:35.473740 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:59:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:35.474548 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:59:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:39.466321 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:59:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:39.474701 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:59:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:39.474892 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:59:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:39.475355 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:59:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:42.467530 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:59:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:42.475717 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:59:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:42.476621 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:59:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:42.476697 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:59:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:43.466242 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:59:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:43.474353 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:59:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:43.475210 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:59:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:43.475274 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:59:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:43.780851 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 20:59:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:43.781513 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 20:59:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:43.781630 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 20:59:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:43.781695 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 20:59:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:43.781744 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 20:59:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:43.781810 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 20:59:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:43.781860 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 20:59:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:43.782149 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 20:59:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:43.782227 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 20:59:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:45.070695 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:59:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:45.078811 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:59:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:45.079219 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:59:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:45.079277 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:59:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:46.467353 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:59:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:46.478613 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:59:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:46.478799 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:59:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:46.478854 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:59:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:47.467530 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:59:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:47.474696 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:59:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:47.474895 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:59:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:47.475118 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:59:48 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.dqSruC.mount: Deactivated successfully.
Jan 16 20:59:50 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 20:59:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:55.155150 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 20:59:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:55.162189 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 20:59:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:55.163513 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 20:59:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 20:59:55.163584 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 20:59:55 api-int.lab.ocpipi.lan approve-csr.sh[12640]: No resources found
Jan 16 21:00:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:00.467755 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:00:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:00.473842 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:00:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:00.474195 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:00:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:00.474380 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:00:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:05.233547 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:00:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:05.240647 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:00:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:05.240755 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:00:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:05.240804 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:00:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:09.467376 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:00:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:09.476912 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:00:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:09.477523 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:00:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:09.477594 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:00:10 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:00:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:15.325828 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:00:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:15.339264 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:00:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:15.339534 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:00:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:15.339596 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:00:16 api-int.lab.ocpipi.lan approve-csr.sh[12716]: No resources found
Jan 16 21:00:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:16.467169 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:00:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:16.477354 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:00:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:16.477609 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:00:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:16.477668 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:00:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.sAIpLh.mount: Deactivated successfully.
Jan 16 21:00:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:25.432545 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:00:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:25.441540 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:00:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:25.441820 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:00:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:25.441881 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:00:28 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.NQEnUs.mount: Deactivated successfully.
Jan 16 21:00:31 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:00:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:35.554875 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:00:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:35.560314 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:00:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:35.560552 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:00:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:35.560618 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:00:37 api-int.lab.ocpipi.lan approve-csr.sh[12796]: No resources found
Jan 16 21:00:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:43.783083 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 21:00:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:43.783337 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 21:00:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:43.783575 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 21:00:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:43.783655 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 21:00:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:43.783730 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 21:00:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:43.783910 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 21:00:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:43.784157 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 21:00:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:43.784206 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 21:00:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:43.784268 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 21:00:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:45.640220 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:00:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:45.646693 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:00:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:45.646906 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:00:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:45.647130 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:00:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:47.469257 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:00:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:47.475110 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:00:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:47.475759 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:00:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:47.476602 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:00:51 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:00:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:52.468259 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:00:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:52.476660 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:00:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:52.477137 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:00:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:52.477204 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:00:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:53.468343 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:00:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:53.476713 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:00:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:53.477271 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:00:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:53.477346 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:00:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:55.708872 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:00:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:55.716372 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:00:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:55.717335 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:00:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:55.717583 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:00:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:57.467717 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:00:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:57.481106 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:00:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:57.481328 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:00:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:57.481386 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:00:57 api-int.lab.ocpipi.lan approve-csr.sh[12875]: No resources found
Jan 16 21:00:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:58.468610 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:00:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:58.481837 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:00:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:58.482179 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:00:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:00:58.482236 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:01:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:05.800161 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:01:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:05.808735 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:01:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:05.809395 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:01:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:05.809563 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:01:11 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:01:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:13.468275 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:01:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:13.474700 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:01:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:13.475132 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:01:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:13.475199 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:01:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:15.876731 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:01:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:15.886323 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:01:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:15.887182 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:01:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:15.887768 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:01:18 api-int.lab.ocpipi.lan approve-csr.sh[12951]: No resources found
Jan 16 21:01:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:21.467166 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:01:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:21.473816 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:01:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:21.474289 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:01:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:21.474374 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:01:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:25.955322 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:01:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:25.965702 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:01:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:25.966317 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:01:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:25.967235 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:01:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:31.468869 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:01:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:31.475398 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:01:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:31.475633 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:01:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:31.475689 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:01:31 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:01:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:36.079345 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:01:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:36.085697 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:01:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:36.086060 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:01:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:36.086249 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:01:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:36.467354 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:01:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:36.473824 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:01:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:36.475160 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:01:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:36.475342 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:01:39 api-int.lab.ocpipi.lan approve-csr.sh[13049]: No resources found
Jan 16 21:01:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:43.784887 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 21:01:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:43.785625 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 21:01:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:43.785739 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 21:01:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:43.785790 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 21:01:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:43.785896 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 21:01:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:43.786229 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 21:01:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:43.786301 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 21:01:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:43.786361 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 21:01:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:43.786408 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 21:01:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:46.152798 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:01:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:46.161832 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:01:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:46.162244 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:01:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:46.162306 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:01:51 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:01:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:56.288777 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:01:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:56.295389 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:01:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:56.295638 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:01:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:01:56.295696 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:02:00 api-int.lab.ocpipi.lan approve-csr.sh[13127]: No resources found
Jan 16 21:02:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:02.468175 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:02:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:02.476733 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:02:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:02.477118 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:02:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:02.477179 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:02:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:06.368672 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:02:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:06.376199 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:02:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:06.376401 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:02:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:06.376461 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:02:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:06.467434 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:02:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:06.474163 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:02:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:06.474393 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:02:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:06.474457 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:02:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:11.466752 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:02:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:11.475354 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:02:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:11.476283 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:02:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:11.476892 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:02:11 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:02:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:16.442069 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:02:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:16.452175 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:02:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:16.453158 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:02:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:16.453823 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:02:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.1NJHuL.mount: Deactivated successfully.
Jan 16 21:02:21 api-int.lab.ocpipi.lan approve-csr.sh[13208]: No resources found
Jan 16 21:02:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:22.466841 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:02:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:22.467767 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:02:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:22.478119 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:02:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:22.478785 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:02:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:22.479442 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:02:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:22.478623 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:02:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:22.480796 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:02:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:22.481358 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:02:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:24.468473 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:02:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:24.476882 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:02:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:24.478376 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:02:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:24.478893 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:02:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:26.466837 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:02:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:26.475611 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:02:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:26.475797 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:02:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:26.475851 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:02:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:26.559178 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:02:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:26.566297 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:02:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:26.566619 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:02:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:26.566682 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:02:28 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.LzjU5a.mount: Deactivated successfully.
Jan 16 21:02:32 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:02:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:36.467654 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:02:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:36.475384 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:02:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:36.476391 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:02:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:36.477146 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:02:36 api-int.lab.ocpipi.lan sudo[13263]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps
Jan 16 21:02:36 api-int.lab.ocpipi.lan sudo[13263]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000)
Jan 16 21:02:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:36.641823 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:02:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:36.656767 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:02:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:36.656880 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:02:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:36.657096 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:02:37 api-int.lab.ocpipi.lan sudo[13263]: pam_unix(sudo:session): session closed for user root
Jan 16 21:02:42 api-int.lab.ocpipi.lan approve-csr.sh[13298]: No resources found
Jan 16 21:02:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:43.787854 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 21:02:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:43.788233 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 21:02:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:43.788313 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 21:02:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:43.788366 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 21:02:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:43.788433 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 21:02:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:43.788479 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 21:02:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:43.788647 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 21:02:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:43.788710 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 21:02:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:43.788751 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 21:02:43 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:02:43.886675867Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2" id=a3eb4e6c-5e90-4ccd-9489-e84515f741fe name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:02:43 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:02:43.889119370Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a5beb712367dd5020b5a7b99c2ffbfcd91d3c6c425625d5cc816f58cf145564f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2],Size_:448590957,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=a3eb4e6c-5e90-4ccd-9489-e84515f741fe name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:02:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:46.753227 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:02:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:46.761083 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:02:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:46.761192 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:02:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:46.761244 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:02:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:47.466408 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:02:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:47.473509 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:02:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:47.473840 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:02:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:47.473897 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:02:52 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:02:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:56.828389 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:02:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:56.843227 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:02:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:56.843613 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:02:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:02:56.843688 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:03:03 api-int.lab.ocpipi.lan approve-csr.sh[13382]: No resources found
Jan 16 21:03:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:06.949811 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:03:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:06.958227 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:03:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:06.958410 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:03:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:06.958512 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:03:09 api-int.lab.ocpipi.lan sudo[13421]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps
Jan 16 21:03:09 api-int.lab.ocpipi.lan sudo[13421]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000)
Jan 16 21:03:09 api-int.lab.ocpipi.lan sudo[13421]: pam_unix(sudo:session): session closed for user root
Jan 16 21:03:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:11.468273 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:03:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:11.476190 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:03:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:11.476303 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:03:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:11.476793 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:03:12 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:03:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:17.035298 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:03:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:17.045702 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:03:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:17.045896 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:03:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:17.046116 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:03:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:17.466826 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:03:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:17.478078 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:03:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:17.478526 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:03:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:17.479393 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:03:23 api-int.lab.ocpipi.lan approve-csr.sh[13475]: No resources found
Jan 16 21:03:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:25.467302 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:03:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:25.474794 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:03:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:25.475343 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:03:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:25.475459 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:03:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:27.127708 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:03:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:27.136441 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:03:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:27.136744 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:03:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:27.136802 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:03:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:32.469099 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:03:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:32.477454 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:03:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:32.477653 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:03:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:32.477880 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:03:32 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:03:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:37.228363 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:03:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:37.237463 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:03:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:37.238345 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:03:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:37.238544 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:03:38 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.2hVsnm.mount: Deactivated successfully.
Jan 16 21:03:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:42.467184 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:03:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:42.473480 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:03:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:42.473745 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:03:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:42.473810 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:03:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:43.790421 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 21:03:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:43.790813 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 21:03:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:43.791095 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 21:03:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:43.791182 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 21:03:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:43.791255 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 21:03:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:43.791311 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 21:03:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:43.791365 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 21:03:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:43.791422 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 21:03:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:43.791464 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 21:03:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:44.467724 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:03:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:44.476547 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:03:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:44.477299 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:03:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:44.477396 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:03:44 api-int.lab.ocpipi.lan approve-csr.sh[13553]: No resources found
Jan 16 21:03:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:47.315230 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:03:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:47.322130 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:03:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:47.322667 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:03:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:47.322733 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:03:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:48.467083 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:03:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:48.476334 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:03:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:48.476519 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:03:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:48.476684 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:03:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:51.466239 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:03:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:51.472418 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:03:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:51.472739 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:03:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:51.472796 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:03:52 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:03:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:57.395458 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:03:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:57.402731 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:03:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:57.403347 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:03:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:57.403403 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:03:57 api-int.lab.ocpipi.lan bootkube.sh[7556]: Error: error while checking pod status: timed out waiting for the condition
Jan 16 21:03:57 api-int.lab.ocpipi.lan bootkube.sh[7556]: Tearing down temporary bootstrap control plane...
Jan 16 21:03:57 api-int.lab.ocpipi.lan bootkube.sh[7556]: Error: error while checking pod status: timed out waiting for the condition
Jan 16 21:03:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:57.431483 2579 kubelet.go:2435] "SyncLoop REMOVE" source="file" pods=[openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain]
Jan 16 21:03:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:57.431886 2579 kubelet.go:2435] "SyncLoop REMOVE" source="file" pods=[openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain]
Jan 16 21:03:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:57.432788 2579 kuberuntime_container.go:742] "Killing container with a grace period" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" podUID=05c96ce8daffad47cf2b15e2a67753ec containerName="cluster-version-operator" containerID="cri-o://f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981" gracePeriod=130
Jan 16 21:03:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:57.433454 2579 kuberuntime_container.go:742] "Killing container with a grace period" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" podUID=a6238b9f1f3a2f2bd2b4b1b0c7962bdd containerName="cloud-credential-operator" containerID="cri-o://0788d090c8866cdce69b0836680b2f097ddf00276c15b5fb80e2d55e2c7e6c87" gracePeriod=30
Jan 16 21:03:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:57.434091 2579 kubelet.go:2435] "SyncLoop REMOVE" source="file" pods=[openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain]
Jan 16 21:03:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:57.434208 2579 kubelet.go:2435] "SyncLoop REMOVE" source="file" pods=[kube-system/bootstrap-kube-controller-manager-localhost.localdomain]
Jan 16 21:03:57 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:57.435173216Z" level=info msg="Stopping container: 0788d090c8866cdce69b0836680b2f097ddf00276c15b5fb80e2d55e2c7e6c87 (timeout: 30s)" id=1a84fbb3-1fec-40dd-b150-88752c9709f2 name=/runtime.v1.RuntimeService/StopContainer
Jan 16 21:03:57 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:57.436803425Z" level=info msg="Stopping container: f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981 (timeout: 130s)" id=c40a1526-db59-46b8-9344-0dd3e769c822 name=/runtime.v1.RuntimeService/StopContainer
Jan 16 21:03:57 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:57.446355087Z" level=info msg="Stopping container: f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2 (timeout: 30s)" id=c07e4bba-1785-48f8-82d2-3e7aaa2b14e9 name=/runtime.v1.RuntimeService/StopContainer
Jan 16 21:03:57 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:57.447805587Z" level=info msg="Stopping container: d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3 (timeout: 30s)" id=d5c9c311-abc7-4c07-b3cc-80010b0a6451 name=/runtime.v1.RuntimeService/StopContainer
Jan 16 21:03:57 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:57.452232549Z" level=info msg="Stopping container: 0a595a7350da388b8c61b7e704112d1c886edf09068e421c56d19d38e17f400f (timeout: 30s)" id=961533e1-b084-4c63-9383-f8bd69552959 name=/runtime.v1.RuntimeService/StopContainer
Jan 16 21:03:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:57.442790 2579 kuberuntime_container.go:742] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" podUID=c3db590e56a311b869092b2d6b1724e5 containerName="cluster-policy-controller" containerID="cri-o://d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3" gracePeriod=30
Jan 16 21:03:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:57.443472 2579 kuberuntime_container.go:742] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" podUID=c3db590e56a311b869092b2d6b1724e5 containerName="kube-controller-manager" containerID="cri-o://f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2" gracePeriod=30
Jan 16 21:03:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:57.448225 2579 kubelet.go:2435] "SyncLoop REMOVE" source="file" pods=[kube-system/bootstrap-kube-scheduler-localhost.localdomain]
Jan 16 21:03:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:57.448811 2579 kuberuntime_container.go:742] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" podUID=b8b0f2012ce2b145220be181d7a5aa55 containerName="kube-scheduler" containerID="cri-o://0a595a7350da388b8c61b7e704112d1c886edf09068e421c56d19d38e17f400f" gracePeriod=30
Jan 16 21:03:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:57.475425 2579 kuberuntime_container.go:742] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" podUID=1cb3be1f2df5273e9b77f7050777bcbe containerName="kube-apiserver" containerID="cri-o://ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5" gracePeriod=135
Jan 16 21:03:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:57.475813 2579 kuberuntime_container.go:742] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" podUID=1cb3be1f2df5273e9b77f7050777bcbe containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023" gracePeriod=135
Jan 16 21:03:57 api-int.lab.ocpipi.lan systemd[1]: libpod-23a7dbcb3283acf03eafcf5c8d7e5b76ba821720482533edd6603732aefc2915.scope: Deactivated successfully.
Jan 16 21:03:57 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:57.478208139Z" level=info msg="Stopping container: 3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023 (timeout: 135s)" id=89a40bf9-8e20-4045-8cae-32685e10b14e name=/runtime.v1.RuntimeService/StopContainer
Jan 16 21:03:57 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:57.481777807Z" level=info msg="Stopping container: ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5 (timeout: 135s)" id=8227e1ef-136c-40ae-99aa-eb389dd76882 name=/runtime.v1.RuntimeService/StopContainer
Jan 16 21:03:57 api-int.lab.ocpipi.lan systemd[1]: libpod-23a7dbcb3283acf03eafcf5c8d7e5b76ba821720482533edd6603732aefc2915.scope: Consumed 6.086s CPU time.
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: crio-f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2.scope: Deactivated successfully.
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: crio-f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2.scope: Consumed 1min 30.515s CPU time.
Jan 16 21:03:58 api-int.lab.ocpipi.lan conmon[8783]: conmon f548d3ccb41e4dce8487 : Failed to open cgroups file: /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3db590e56a311b869092b2d6b1724e5.slice/crio-f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2.scope/memory.events
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2.scope: Deactivated successfully.
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: crio-3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023.scope: Deactivated successfully.
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: crio-d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3.scope: Deactivated successfully.
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: crio-d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3.scope: Consumed 41.213s CPU time.
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023.scope: Deactivated successfully.
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3.scope: Deactivated successfully.
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23a7dbcb3283acf03eafcf5c8d7e5b76ba821720482533edd6603732aefc2915-userdata-shm.mount: Deactivated successfully.
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-937cca404f9a9ff0dc1a8b08d0f2f10dd42fc0f69083da0fbaa1ae0b5bb26d2d-merged.mount: Deactivated successfully.
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-0d297acd6f4195dc31f1cf89adaf2188d947d8efc5fafcf7af370fd9da524dcb-merged.mount: Deactivated successfully.
Jan 16 21:03:58 api-int.lab.ocpipi.lan bootkube.sh[3228]: Using /opt/openshift/auth/kubeconfig as KUBECONFIG
Jan 16 21:03:58 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:58.344515386Z" level=info msg="Stopped container f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager" id=c07e4bba-1785-48f8-82d2-3e7aaa2b14e9 name=/runtime.v1.RuntimeService/StopContainer
Jan 16 21:03:58 api-int.lab.ocpipi.lan bootkube.sh[3228]: Gathering cluster resources ...
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-df3169813fd3f314071b9fa2ffc126bda37b92c4bb1e145b6637109e0d85a4ef-merged.mount: Deactivated successfully.
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: crio-0a595a7350da388b8c61b7e704112d1c886edf09068e421c56d19d38e17f400f.scope: Deactivated successfully.
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: crio-0a595a7350da388b8c61b7e704112d1c886edf09068e421c56d19d38e17f400f.scope: Consumed 16.965s CPU time.
Jan 16 21:03:58 api-int.lab.ocpipi.lan conmon[8279]: conmon 0a595a7350da388b8c61 : Failed to open cgroups file: /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8b0f2012ce2b145220be181d7a5aa55.slice/crio-0a595a7350da388b8c61b7e704112d1c886edf09068e421c56d19d38e17f400f.scope/memory.events
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-0a595a7350da388b8c61b7e704112d1c886edf09068e421c56d19d38e17f400f.scope: Deactivated successfully.
Jan 16 21:03:58 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:58.485295762Z" level=info msg="Stopped container 3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver-insecure-readyz" id=89a40bf9-8e20-4045-8cae-32685e10b14e name=/runtime.v1.RuntimeService/StopContainer
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-70778a1fda874c2daf95b0c0477d9068d2a358f3eecc1550ba05956e81d4740f-merged.mount: Deactivated successfully.
Jan 16 21:03:58 api-int.lab.ocpipi.lan sudo[13727]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get nodes -o jsonpath -l node-role.kubernetes.io/master --template {range .items[*]}{.metadata.name}{"\n"}{end}
Jan 16 21:03:58 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:58.581446141Z" level=info msg="Stopped container d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/cluster-policy-controller" id=d5c9c311-abc7-4c07-b3cc-80010b0a6451 name=/runtime.v1.RuntimeService/StopContainer
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: Created slice User Slice of UID 0.
Jan 16 21:03:58 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:58.630246771Z" level=info msg="Stopping pod sandbox: 79c10015fd162b8e62ecb33ebeccbd5e476b9a518fb7eb7c00b519d5bb0eb934" id=b573b0db-7792-4c46-8762-88cc4b30f5df name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 16 21:03:58 api-int.lab.ocpipi.lan sudo[13731]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get pods --all-namespaces --template {{ range .items }}{{ $name := .metadata.name }}{{ $ns := .metadata.namespace }}{{ range .spec.containers }}-n {{ $ns }} {{ $name }} -c {{ .name }}{{ "\n" }}{{ end }}{{ range .spec.initContainers }}-n {{ $ns }} {{ $name }} -c {{ .name }}{{ "\n" }}{{ end }}{{ end }}
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: Starting User Manager for UID 0...
Jan 16 21:03:58 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:58.678319186Z" level=info msg="Stopped pod sandbox: 79c10015fd162b8e62ecb33ebeccbd5e476b9a518fb7eb7c00b519d5bb0eb934" id=b573b0db-7792-4c46-8762-88cc4b30f5df name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: crio-0788d090c8866cdce69b0836680b2f097ddf00276c15b5fb80e2d55e2c7e6c87.scope: Deactivated successfully.
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: crio-0788d090c8866cdce69b0836680b2f097ddf00276c15b5fb80e2d55e2c7e6c87.scope: Consumed 6.895s CPU time.
Jan 16 21:03:58 api-int.lab.ocpipi.lan sudo[13723]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get nodes -o jsonpath --template {range .items[*]}{.metadata.name}{"\n"}{end}
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-0788d090c8866cdce69b0836680b2f097ddf00276c15b5fb80e2d55e2c7e6c87.scope: Deactivated successfully.
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[13772]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: crio-f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981.scope: Deactivated successfully.
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: crio-f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981.scope: Consumed 1min 8.109s CPU time.
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981.scope: Deactivated successfully.
Jan 16 21:03:58 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981.scope: Consumed 3.590s CPU time.
Jan 16 21:03:58 api-int.lab.ocpipi.lan sudo[13737]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get pods -l apiserver=true --all-namespaces --template {{ range .items }}-n {{ .metadata.namespace }} {{ .metadata.name }}{{ "\n" }}{{ end }}
Jan 16 21:03:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:58.892170 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-config\") pod \"c3db590e56a311b869092b2d6b1724e5\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") "
Jan 16 21:03:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:58.892478 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-ssl-certs-host\") pod \"c3db590e56a311b869092b2d6b1724e5\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") "
Jan 16 21:03:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:58.892521 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-logs\") pod \"c3db590e56a311b869092b2d6b1724e5\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") "
Jan 16 21:03:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:58.892617 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-etc-kubernetes-cloud\") pod \"c3db590e56a311b869092b2d6b1724e5\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") "
Jan 16 21:03:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:58.892668 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-secrets\") pod \"c3db590e56a311b869092b2d6b1724e5\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") "
Jan 16 21:03:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:58.892816 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-secrets" (OuterVolumeSpecName: "secrets") pod "c3db590e56a311b869092b2d6b1724e5" (UID: "c3db590e56a311b869092b2d6b1724e5"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:03:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:58.894752 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-logs" (OuterVolumeSpecName: "logs") pod "c3db590e56a311b869092b2d6b1724e5" (UID: "c3db590e56a311b869092b2d6b1724e5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:03:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:58.894809 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "c3db590e56a311b869092b2d6b1724e5" (UID: "c3db590e56a311b869092b2d6b1724e5"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:03:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:58.894859 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "c3db590e56a311b869092b2d6b1724e5" (UID: "c3db590e56a311b869092b2d6b1724e5"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:03:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:58.892995 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-config" (OuterVolumeSpecName: "config") pod "c3db590e56a311b869092b2d6b1724e5" (UID: "c3db590e56a311b869092b2d6b1724e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:03:58 api-int.lab.ocpipi.lan sudo[13775]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get clusterversion -o json
Jan 16 21:03:58 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:58.950359570Z" level=info msg="Stopped container 0a595a7350da388b8c61b7e704112d1c886edf09068e421c56d19d38e17f400f: kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler" id=961533e1-b084-4c63-9383-f8bd69552959 name=/runtime.v1.RuntimeService/StopContainer
Jan 16 21:03:58 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:58.986305862Z" level=info msg="Stopping pod sandbox: 8ef4b7210274a6b52b1f275b2b88575b44667f9376ae93b8eea1a279639e87b6" id=d0dfeba3-6090-4e10-93b8-fe6b4c3610e8 name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:03:58 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:58.995385896Z" level=info msg="Stopped pod sandbox: 8ef4b7210274a6b52b1f275b2b88575b44667f9376ae93b8eea1a279639e87b6" id=d0dfeba3-6090-4e10-93b8-fe6b4c3610e8 name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:58.995356 2579 reconciler_common.go:300] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-etc-kubernetes-cloud\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:58.995403 2579 reconciler_common.go:300] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-ssl-certs-host\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:58.995431 2579 reconciler_common.go:300] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-logs\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:58.995461 2579 reconciler_common.go:300] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-secrets\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:58.995486 2579 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-config\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13750]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get apiservices -o json
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13781]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get configmaps --all-namespaces -o json
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.043817 2579 generic.go:334] "Generic (PLEG): container finished" podID=b8b0f2012ce2b145220be181d7a5aa55 containerID="0a595a7350da388b8c61b7e704112d1c886edf09068e421c56d19d38e17f400f" exitCode=0
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.044125 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ef4b7210274a6b52b1f275b2b88575b44667f9376ae93b8eea1a279639e87b6"
Jan 16 21:03:59 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:59.067211654Z" level=info msg="Stopped container 0788d090c8866cdce69b0836680b2f097ddf00276c15b5fb80e2d55e2c7e6c87: openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/cloud-credential-operator" id=1a84fbb3-1fec-40dd-b150-88752c9709f2 name=/runtime.v1.RuntimeService/StopContainer
Jan 16 21:03:59 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:59.077169911Z" level=info msg="Stopping pod sandbox: 26024c8016ef3e2119dd507f560533c94af57eb36863fae575a12ac36b7c6b00" id=b053e860-c593-4aa7-b34d-600cfaaa5285 name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.078369 2579 generic.go:334] "Generic (PLEG): container finished" podID=a6238b9f1f3a2f2bd2b4b1b0c7962bdd containerID="0788d090c8866cdce69b0836680b2f097ddf00276c15b5fb80e2d55e2c7e6c87" exitCode=0
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.089244 2579 generic.go:334] "Generic (PLEG): container finished" podID=1cb3be1f2df5273e9b77f7050777bcbe containerID="3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023" exitCode=0
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13757]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get clusteroperators -o json
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.102665 2579 generic.go:334] "Generic (PLEG): container finished" podID=c3db590e56a311b869092b2d6b1724e5 containerID="f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2" exitCode=0
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.102714 2579 generic.go:334] "Generic (PLEG): container finished" podID=c3db590e56a311b869092b2d6b1724e5 containerID="d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3" exitCode=0
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.102849 2579 scope.go:115] "RemoveContainer" containerID="f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2"
Jan 16 21:03:59 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:59.122700345Z" level=info msg="Removing container: f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2" id=ce8fa972-156b-4280-8930-a705e8b57151 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: Removed slice libcontainer container kubepods-burstable-podc3db590e56a311b869092b2d6b1724e5.slice.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: kubepods-burstable-podc3db590e56a311b869092b2d6b1724e5.slice: Consumed 2min 14.302s CPU time.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[13772]: Queued start job for default target Main User Target.
Jan 16 21:03:59 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:59.154822876Z" level=info msg="Stopped pod sandbox: 26024c8016ef3e2119dd507f560533c94af57eb36863fae575a12ac36b7c6b00" id=b053e860-c593-4aa7-b34d-600cfaaa5285 name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[13772]: Created slice User Application Slice.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[13772]: Started Daily Cleanup of User's Temporary Directories.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[13772]: Reached target Paths.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[13772]: Reached target Timers.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[13772]: Starting D-Bus User Message Bus Socket...
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[13772]: Starting Create User's Volatile Files and Directories...
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[13772]: Listening on D-Bus User Message Bus Socket.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[13772]: Reached target Sockets.
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.206265 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-logs\") pod \"b8b0f2012ce2b145220be181d7a5aa55\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") "
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.206377 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-secrets\") pod \"b8b0f2012ce2b145220be181d7a5aa55\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") "
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.206524 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-secrets" (OuterVolumeSpecName: "secrets") pod "b8b0f2012ce2b145220be181d7a5aa55" (UID: "b8b0f2012ce2b145220be181d7a5aa55"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.206548 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-logs" (OuterVolumeSpecName: "logs") pod "b8b0f2012ce2b145220be181d7a5aa55" (UID: "b8b0f2012ce2b145220be181d7a5aa55"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13824]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get csr -o json
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[13772]: Finished Create User's Volatile Files and Directories.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[13772]: Reached target Basic System.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[13772]: Reached target Main User Target.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[13772]: Startup finished in 433ms.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: Started User Manager for UID 0.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: Started Session c1 of User root.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: Started Session c2 of User root.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: Started Session c3 of User root.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: Started Session c4 of User root.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: Started Session c5 of User root.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: Started Session c6 of User root.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: Started Session c7 of User root.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: Started Session c8 of User root.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-a6cfc3c1af7ed212c99fa3e0605ea483d4bb394224aa158bd5c8e6bf3c0a1877-merged.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-5eaa66d8e9fb20f80629cc4fdce011360d4de34c4f0d7a6d67c83aa489615c09-merged.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: run-netns-fd52f44c\x2da8d0\x2d40a9\x2d89b1\x2d3d0685e1a228.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: run-ipcns-fd52f44c\x2da8d0\x2d40a9\x2d89b1\x2d3d0685e1a228.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: run-utsns-fd52f44c\x2da8d0\x2d40a9\x2d89b1\x2d3d0685e1a228.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: run-containers-storage-overlay\x2dcontainers-8ef4b7210274a6b52b1f275b2b88575b44667f9376ae93b8eea1a279639e87b6-userdata-shm.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-a8aa81f5df8ffd442c31e8a180fda2c3264b23be048aa5ce32cb4a48928cbc45-merged.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-3b6e9962b450741f0fd7239dcba6503c42d158b49bfa3ec4032216e1cedd6662-merged.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: run-netns-dfc52cbc\x2def5b\x2d4806\x2d9358\x2dcc82cb68b717.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: run-ipcns-dfc52cbc\x2def5b\x2d4806\x2d9358\x2dcc82cb68b717.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: run-utsns-dfc52cbc\x2def5b\x2d4806\x2d9358\x2dcc82cb68b717.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: run-containers-storage-overlay\x2dcontainers-79c10015fd162b8e62ecb33ebeccbd5e476b9a518fb7eb7c00b519d5bb0eb934-userdata-shm.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-e54f3709366e6cd9a0af044574251616e8490ecb7fadfbf815e705aee33b3959-merged.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: run-netns-793ab5d4\x2d5c87\x2d496f\x2db19e\x2d641ff612b8a1.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: run-ipcns-793ab5d4\x2d5c87\x2d496f\x2db19e\x2d641ff612b8a1.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: run-utsns-793ab5d4\x2d5c87\x2d496f\x2db19e\x2d641ff612b8a1.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: run-containers-storage-overlay\x2dcontainers-26024c8016ef3e2119dd507f560533c94af57eb36863fae575a12ac36b7c6b00-userdata-shm.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13727]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13731]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13723]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.309291 2579 reconciler_common.go:300] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-logs\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.309377 2579 reconciler_common.go:300] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-secrets\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13737]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-52d35c84aadae533e3bc66c7b6e6cf1cd8cbcbc9c3f5fecad678fa595c03a152-merged.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13775]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13781]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13750]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13757]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: Started Session c9 of User root.
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.413500 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6238b9f1f3a2f2bd2b4b1b0c7962bdd-secrets" (OuterVolumeSpecName: "secrets") pod "a6238b9f1f3a2f2bd2b4b1b0c7962bdd" (UID: "a6238b9f1f3a2f2bd2b4b1b0c7962bdd"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.413643 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a6238b9f1f3a2f2bd2b4b1b0c7962bdd-secrets\") pod \"a6238b9f1f3a2f2bd2b4b1b0c7962bdd\" (UID: \"a6238b9f1f3a2f2bd2b4b1b0c7962bdd\") "
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.413755 2579 reconciler_common.go:300] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a6238b9f1f3a2f2bd2b4b1b0c7962bdd-secrets\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:03:59 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:59.430320941Z" level=info msg="Stopped container f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator" id=c40a1526-db59-46b8-9344-0dd3e769c822 name=/runtime.v1.RuntimeService/StopContainer
Jan 16 21:03:59 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:59.440540965Z" level=info msg="Stopping pod sandbox: 70686be8a2d87683a00828f4233d059638689db262cbef7d341c1f46aeb3fd09" id=37366a07-9180-4894-9804-74745046786a name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13824]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:03:59 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:59.470062071Z" level=info msg="Removed container f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager" id=ce8fa972-156b-4280-8930-a705e8b57151 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-b7bac75e38dc95ca0e6c5c40904a87764d72c57ab95a9937f352d217fa26aed2-merged.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.476304 2579 scope.go:115] "RemoveContainer" containerID="d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3"
Jan 16 21:03:59 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:59.493100961Z" level=info msg="Stopped pod sandbox: 70686be8a2d87683a00828f4233d059638689db262cbef7d341c1f46aeb3fd09" id=37366a07-9180-4894-9804-74745046786a name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: run-netns-ec310a8f\x2da075\x2d48ab\x2db45f\x2d36f204cc9550.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: run-ipcns-ec310a8f\x2da075\x2d48ab\x2db45f\x2d36f204cc9550.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: run-utsns-ec310a8f\x2da075\x2d48ab\x2db45f\x2d36f204cc9550.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: run-containers-storage-overlay\x2dcontainers-70686be8a2d87683a00828f4233d059638689db262cbef7d341c1f46aeb3fd09-userdata-shm.mount: Deactivated successfully.
Jan 16 21:03:59 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:59.522369867Z" level=info msg="Removing container: d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3" id=0c43879f-fe72-48de-ba76-12eac84e0af7 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13871]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get kubeapiserver -o json
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: Started Session c10 of User root.
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.583173 2579 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_bootstrap-cluster-version-operator-localhost.localdomain_05c96ce8daffad47cf2b15e2a67753ec/cluster-version-operator/1.log"
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13871]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13844]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get endpoints --all-namespaces -o json
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13859]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get events --all-namespaces -o json
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: Started Session c11 of User root.
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13844]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: Started Session c12 of User root.
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13887]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get machines --all-namespaces -o json
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13859]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.724151 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-etc-ssl-certs\") pod \"05c96ce8daffad47cf2b15e2a67753ec\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") "
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.724328 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-kubeconfig\") pod \"05c96ce8daffad47cf2b15e2a67753ec\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") "
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.724435 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "05c96ce8daffad47cf2b15e2a67753ec" (UID: "05c96ce8daffad47cf2b15e2a67753ec"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.724533 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "05c96ce8daffad47cf2b15e2a67753ec" (UID: "05c96ce8daffad47cf2b15e2a67753ec"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:03:59 api-int.lab.ocpipi.lan systemd[1]: Started Session c13 of User root.
Jan 16 21:03:59 api-int.lab.ocpipi.lan sudo[13887]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:03:59 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:59.807410398Z" level=info msg="Removed container d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/cluster-policy-controller" id=0c43879f-fe72-48de-ba76-12eac84e0af7 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.813510 2579 scope.go:115] "RemoveContainer" containerID="14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc"
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.824542 2579 reconciler_common.go:300] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-etc-ssl-certs\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:03:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:03:59.824677 2579 reconciler_common.go:300] "Volume detached for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-kubeconfig\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:03:59 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:03:59.825515732Z" level=info msg="Removing container: 14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc" id=2c538951-f682-4ad1-a636-fbb5b95456fc name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:04:00 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:00.017510304Z" level=info msg="Removed container 14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager" id=2c538951-f682-4ad1-a636-fbb5b95456fc name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.018373 2579 scope.go:115] "RemoveContainer" containerID="f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:04:00.031680 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2\": container with ID starting with f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2 not found: ID does not exist" containerID="f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.031774 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2} err="failed to get container status \"f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2\": rpc error: code = NotFound desc = could not find container \"f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2\": container with ID starting with f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2 not found: ID does not exist"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.031808 2579 scope.go:115] "RemoveContainer" containerID="d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:04:00.033657 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3\": container with ID starting with d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3 not found: ID does not exist" containerID="d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.033728 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3} err="failed to get container status \"d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3\": rpc error: code = NotFound desc = could not find container \"d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3\": container with ID starting with d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3 not found: ID does not exist"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.033768 2579 scope.go:115] "RemoveContainer" containerID="14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:04:00.034813 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc\": container with ID starting with 14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc not found: ID does not exist" containerID="14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.034860 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc} err="failed to get container status \"14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc\": rpc error: code = NotFound desc = could not find container \"14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc\": container with ID starting with 14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc not found: ID does not exist"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.034879 2579 scope.go:115] "RemoveContainer" containerID="f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.050087 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2} err="failed to get container status \"f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2\": rpc error: code = NotFound desc = could not find container \"f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2\": container with ID starting with f548d3ccb41e4dce848701339c85fa6f4e6bf4121755f5d6fdbd7c287cb3a0d2 not found: ID does not exist"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.050188 2579 scope.go:115] "RemoveContainer" containerID="d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.080659 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3} err="failed to get container status \"d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3\": rpc error: code = NotFound desc = could not find container \"d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3\": container with ID starting with d0175af05ba73d648c2b3062a202d575bed3916b71d96d4a4e25e90ec8b9fcb3 not found: ID does not exist"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.080755 2579 scope.go:115] "RemoveContainer" containerID="14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.088237 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc} err="failed to get container status \"14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc\": rpc error: code = NotFound desc = could not find container \"14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc\": container with ID starting with 14037eeba10a1b747479911dd868e3167adaad0a3361b3f5be818e4a800280dc not found: ID does not exist"
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[13976]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get namespaces -o json
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: Started Session c14 of User root.
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.191010 2579 scope.go:115] "RemoveContainer" containerID="0788d090c8866cdce69b0836680b2f097ddf00276c15b5fb80e2d55e2c7e6c87"
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[13976]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:04:00 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:00.207825573Z" level=info msg="Removing container: 0788d090c8866cdce69b0836680b2f097ddf00276c15b5fb80e2d55e2c7e6c87" id=d394af2d-d3e9-4596-9923-ab4d9034f3cb name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[13876]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get kubecontrollermanager -o json
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.232872 2579 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_bootstrap-cluster-version-operator-localhost.localdomain_05c96ce8daffad47cf2b15e2a67753ec/cluster-version-operator/1.log"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.233086 2579 generic.go:334] "Generic (PLEG): container finished" podID=05c96ce8daffad47cf2b15e2a67753ec containerID="f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981" exitCode=0
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-8b936f14bbfc00917e3db73c11f2d347cd97b31ea528fc8f66378aa18b9ce380-merged.mount: Deactivated successfully.
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[13934]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get machineconfigpools -o json
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: Started Session c15 of User root.
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: Started Session c16 of User root.
Jan 16 21:04:00 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:00.366769710Z" level=info msg="Removed container 0788d090c8866cdce69b0836680b2f097ddf00276c15b5fb80e2d55e2c7e6c87: openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/cloud-credential-operator" id=d394af2d-d3e9-4596-9923-ab4d9034f3cb name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.381208 2579 scope.go:115] "RemoveContainer" containerID="f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981"
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: Removed slice libcontainer container kubepods-burstable-podb8b0f2012ce2b145220be181d7a5aa55.slice.
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: kubepods-burstable-podb8b0f2012ce2b145220be181d7a5aa55.slice: Consumed 16.965s CPU time.
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[13876]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: Removed slice libcontainer container kubepods-besteffort-pod05c96ce8daffad47cf2b15e2a67753ec.slice.
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: kubepods-besteffort-pod05c96ce8daffad47cf2b15e2a67753ec.slice: Consumed 1min 8.654s CPU time.
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: Removed slice libcontainer container kubepods-besteffort-poda6238b9f1f3a2f2bd2b4b1b0c7962bdd.slice.
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: kubepods-besteffort-poda6238b9f1f3a2f2bd2b4b1b0c7962bdd.slice: Consumed 6.895s CPU time.
Jan 16 21:04:00 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:00.482117534Z" level=info msg="Removing container: f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981" id=a0eb9b8a-7c61-4bd3-a168-a0fdb45edc65 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[13997]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get nodes -o json
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[13934]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[14014]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get openshiftapiserver -o json
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.527444 2579 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=05c96ce8daffad47cf2b15e2a67753ec path="/var/lib/kubelet/pods/05c96ce8daffad47cf2b15e2a67753ec/volumes"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.528346 2579 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=a6238b9f1f3a2f2bd2b4b1b0c7962bdd path="/var/lib/kubelet/pods/a6238b9f1f3a2f2bd2b4b1b0c7962bdd/volumes"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.528821 2579 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=b8b0f2012ce2b145220be181d7a5aa55 path="/var/lib/kubelet/pods/b8b0f2012ce2b145220be181d7a5aa55/volumes"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.529338 2579 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=c3db590e56a311b869092b2d6b1724e5 path="/var/lib/kubelet/pods/c3db590e56a311b869092b2d6b1724e5/volumes"
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[13955]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get machineconfigs -o json
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: Started Session c17 of User root.
Jan 16 21:04:00 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:00.569152616Z" level=info msg="Removed container f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator" id=a0eb9b8a-7c61-4bd3-a168-a0fdb45edc65 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.571129 2579 scope.go:115] "RemoveContainer" containerID="90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d"
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: Started Session c18 of User root.
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[14029]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get rolebindings --all-namespaces -o json
Jan 16 21:04:00 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:00.596644151Z" level=info msg="Removing container: 90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d" id=f7c6afe7-7deb-4a04-a544-92236b5e2e52 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: Started Session c19 of User root.
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[14014]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[14053]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get secrets --all-namespaces
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[13997]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-878db1d177ef3b3c4fef54018d3dde7a50089c1bca38a73c24ef4202855a9548-merged.mount: Deactivated successfully.
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[14036]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get roles --all-namespaces -o json
Jan 16 21:04:00 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:00.757271870Z" level=info msg="Removed container 90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator" id=f7c6afe7-7deb-4a04-a544-92236b5e2e52 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.758170 2579 scope.go:115] "RemoveContainer" containerID="f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:04:00.765539 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981\": container with ID starting with f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981 not found: ID does not exist" containerID="f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.765709 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981} err="failed to get container status \"f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981\": rpc error: code = NotFound desc = could not find container \"f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981\": container with ID starting with f76c54ce310b345591d6fea03791bc15e056e84296e47c1c755a0852c10f2981 not found: ID does not exist"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.765740 2579 scope.go:115] "RemoveContainer" containerID="90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:04:00.767865 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d\": container with ID starting with 90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d not found: ID does not exist" containerID="90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d"
Jan 16 21:04:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:00.768030 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d} err="failed to get container status \"90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d\": rpc error: code = NotFound desc = could not find container \"90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d\": container with ID starting with 90a34620cf7fa31e2700acd6399c77d91e517493b7a2e628fda8f544e7a0b88d not found: ID does not exist"
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[14019]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get pods --all-namespaces -o json
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[13955]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: Started Session c20 of User root.
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: Started Session c21 of User root.
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: Started Session c22 of User root.
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: Started Session c23 of User root.
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[14075]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get secrets --all-namespaces -o=custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,TYPE:.type,ANNOTATIONS:.metadata.annotations
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[14029]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[14053]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[14036]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[14019]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:04:00 api-int.lab.ocpipi.lan systemd[1]: Started Session c24 of User root.
Jan 16 21:04:00 api-int.lab.ocpipi.lan sudo[14075]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:04:01 api-int.lab.ocpipi.lan systemd[1]: bootkube.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 21:04:01 api-int.lab.ocpipi.lan systemd[1]: bootkube.service: Failed with result 'exit-code'.
Jan 16 21:04:01 api-int.lab.ocpipi.lan systemd[1]: bootkube.service: Consumed 2min 58.438s CPU time.
Jan 16 21:04:01 api-int.lab.ocpipi.lan sudo[13775]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:01 api-int.lab.ocpipi.lan systemd[1]: session-c5.scope: Deactivated successfully.
Jan 16 21:04:02 api-int.lab.ocpipi.lan sudo[13723]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:02 api-int.lab.ocpipi.lan systemd[1]: session-c3.scope: Deactivated successfully.
Jan 16 21:04:02 api-int.lab.ocpipi.lan sudo[13824]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:02 api-int.lab.ocpipi.lan systemd[1]: session-c9.scope: Deactivated successfully.
Jan 16 21:04:02 api-int.lab.ocpipi.lan sudo[13727]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:02 api-int.lab.ocpipi.lan systemd[1]: session-c1.scope: Deactivated successfully.
Jan 16 21:04:02 api-int.lab.ocpipi.lan sudo[13737]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:02 api-int.lab.ocpipi.lan systemd[1]: session-c4.scope: Deactivated successfully.
Jan 16 21:04:03 api-int.lab.ocpipi.lan sudo[13871]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:03 api-int.lab.ocpipi.lan systemd[1]: session-c10.scope: Deactivated successfully.
Jan 16 21:04:03 api-int.lab.ocpipi.lan sudo[13887]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:03 api-int.lab.ocpipi.lan systemd[1]: session-c13.scope: Deactivated successfully.
Jan 16 21:04:03 api-int.lab.ocpipi.lan sudo[13731]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:03 api-int.lab.ocpipi.lan systemd[1]: session-c2.scope: Deactivated successfully.
Jan 16 21:04:03 api-int.lab.ocpipi.lan sudo[13934]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:03 api-int.lab.ocpipi.lan systemd[1]: session-c16.scope: Deactivated successfully.
Jan 16 21:04:03 api-int.lab.ocpipi.lan sudo[13757]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:03 api-int.lab.ocpipi.lan systemd[1]: session-c8.scope: Deactivated successfully.
Jan 16 21:04:03 api-int.lab.ocpipi.lan sudo[14053]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:03 api-int.lab.ocpipi.lan systemd[1]: session-c21.scope: Deactivated successfully.
Jan 16 21:04:03 api-int.lab.ocpipi.lan sudo[13955]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:03 api-int.lab.ocpipi.lan systemd[1]: session-c19.scope: Deactivated successfully.
Jan 16 21:04:03 api-int.lab.ocpipi.lan sudo[13876]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:03 api-int.lab.ocpipi.lan systemd[1]: session-c15.scope: Deactivated successfully.
Jan 16 21:04:03 api-int.lab.ocpipi.lan sudo[13750]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:03 api-int.lab.ocpipi.lan systemd[1]: session-c7.scope: Deactivated successfully.
Jan 16 21:04:03 api-int.lab.ocpipi.lan sudo[13997]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:03 api-int.lab.ocpipi.lan systemd[1]: session-c18.scope: Deactivated successfully.
Jan 16 21:04:04 api-int.lab.ocpipi.lan sudo[13844]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:04 api-int.lab.ocpipi.lan systemd[1]: session-c11.scope: Deactivated successfully.
Jan 16 21:04:04 api-int.lab.ocpipi.lan sudo[14075]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:04 api-int.lab.ocpipi.lan systemd[1]: session-c24.scope: Deactivated successfully.
Jan 16 21:04:04 api-int.lab.ocpipi.lan sudo[14019]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:04 api-int.lab.ocpipi.lan systemd[1]: session-c23.scope: Deactivated successfully.
Jan 16 21:04:04 api-int.lab.ocpipi.lan sudo[14014]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:04 api-int.lab.ocpipi.lan systemd[1]: session-c17.scope: Deactivated successfully.
Jan 16 21:04:04 api-int.lab.ocpipi.lan sudo[13859]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:04 api-int.lab.ocpipi.lan sudo[14029]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:04 api-int.lab.ocpipi.lan systemd[1]: session-c20.scope: Deactivated successfully.
Jan 16 21:04:04 api-int.lab.ocpipi.lan systemd[1]: session-c12.scope: Deactivated successfully.
Jan 16 21:04:04 api-int.lab.ocpipi.lan sudo[13976]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:04 api-int.lab.ocpipi.lan systemd[1]: session-c14.scope: Deactivated successfully.
Jan 16 21:04:04 api-int.lab.ocpipi.lan sudo[14036]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:04 api-int.lab.ocpipi.lan systemd[1]: session-c22.scope: Deactivated successfully.
Jan 16 21:04:04 api-int.lab.ocpipi.lan sudo[13781]: pam_unix(sudo:session): session closed for user root
Jan 16 21:04:04 api-int.lab.ocpipi.lan systemd[1]: session-c6.scope: Deactivated successfully.
Jan 16 21:04:04 api-int.lab.ocpipi.lan systemd[1]: session-c6.scope: Consumed 1.018s CPU time.
Jan 16 21:04:05 api-int.lab.ocpipi.lan approve-csr.sh[14222]: No resources found
Jan 16 21:04:06 api-int.lab.ocpipi.lan systemd[1]: bootkube.service: Scheduled restart job, restart counter is at 1.
Jan 16 21:04:06 api-int.lab.ocpipi.lan systemd[1]: Stopped Bootstrap a Kubernetes cluster.
Jan 16 21:04:06 api-int.lab.ocpipi.lan systemd[1]: bootkube.service: Consumed 2min 58.438s CPU time.
Jan 16 21:04:06 api-int.lab.ocpipi.lan systemd[1]: Started Bootstrap a Kubernetes cluster.
Jan 16 21:04:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:07.698318 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:04:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:07.708882 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:04:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:07.709262 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:04:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:07.709321 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:04:08 api-int.lab.ocpipi.lan systemd[1]: run-runc-a6ec663ce72ed74f6055433329bd411efa6dfee752b7a91ffcb0c14bbc154810-runc.InZM0n.mount: Deactivated successfully.
Jan 16 21:04:08 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container a6ec663ce72ed74f6055433329bd411efa6dfee752b7a91ffcb0c14bbc154810.
Jan 16 21:04:08 api-int.lab.ocpipi.lan systemd[1]: libpod-a6ec663ce72ed74f6055433329bd411efa6dfee752b7a91ffcb0c14bbc154810.scope: Deactivated successfully.
Jan 16 21:04:09 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-9c7d6f6c6d19646bdffa3cd1efb4d09914bedd2a3d976a2773f33f673452186d-merged.mount: Deactivated successfully.
Jan 16 21:04:09 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a6ec663ce72ed74f6055433329bd411efa6dfee752b7a91ffcb0c14bbc154810-userdata-shm.mount: Deactivated successfully.
Jan 16 21:04:09 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 3f9ba1c712b3f998241eca6427a7310628c0cefa52016a15565dcfb34c9d2c27.
Jan 16 21:04:10 api-int.lab.ocpipi.lan systemd[1]: libpod-3f9ba1c712b3f998241eca6427a7310628c0cefa52016a15565dcfb34c9d2c27.scope: Deactivated successfully.
Jan 16 21:04:10 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f9ba1c712b3f998241eca6427a7310628c0cefa52016a15565dcfb34c9d2c27-userdata-shm.mount: Deactivated successfully.
Jan 16 21:04:10 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-30e94dbff9e1c72e44d4ee3d9c5fa42006672600784a70ea0026674916cdc2a4-merged.mount: Deactivated successfully.
Jan 16 21:04:11 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 8050df27ad81b4c939ea710ca8d9ac172585631e312ba1b40521dd78e9f8d6b4.
Jan 16 21:04:11 api-int.lab.ocpipi.lan systemd[1]: libpod-8050df27ad81b4c939ea710ca8d9ac172585631e312ba1b40521dd78e9f8d6b4.scope: Deactivated successfully.
Jan 16 21:04:12 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8050df27ad81b4c939ea710ca8d9ac172585631e312ba1b40521dd78e9f8d6b4-userdata-shm.mount: Deactivated successfully.
Jan 16 21:04:12 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-4e4819fcc965b02d7ee08497f19bc98db6fb84cd9d90622b9c94f66773dc5f8b-merged.mount: Deactivated successfully.
Jan 16 21:04:12 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container c5b58bca571c41ea6cc2a762bb7eb41efb21fc6d80ddc8d4edf9aecdb03214c9.
Jan 16 21:04:12 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:04:13 api-int.lab.ocpipi.lan systemd[1]: run-runc-c5b58bca571c41ea6cc2a762bb7eb41efb21fc6d80ddc8d4edf9aecdb03214c9-runc.sLt2Sx.mount: Deactivated successfully.
Jan 16 21:04:13 api-int.lab.ocpipi.lan systemd[1]: libpod-c5b58bca571c41ea6cc2a762bb7eb41efb21fc6d80ddc8d4edf9aecdb03214c9.scope: Deactivated successfully.
Jan 16 21:04:13 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c5b58bca571c41ea6cc2a762bb7eb41efb21fc6d80ddc8d4edf9aecdb03214c9-userdata-shm.mount: Deactivated successfully.
Jan 16 21:04:13 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-a7443ea82472c7f5847d6468cf6b274425857b63e00d71131c0237fbc48b7f14-merged.mount: Deactivated successfully.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container db79bfa80d68957335babf99482fda9effd7e4582198b21efa3e1b317368a841.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[1]: Stopping User Manager for UID 0...
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[13772]: Activating special unit Exit the Session...
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[13772]: Stopped target Main User Target.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[13772]: Stopped target Basic System.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[13772]: Stopped target Paths.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[13772]: Stopped target Sockets.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[13772]: Stopped target Timers.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[13772]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[13772]: Closed D-Bus User Message Bus Socket.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[13772]: Stopped Create User's Volatile Files and Directories.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[13772]: Removed slice User Application Slice.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[13772]: Reached target Shutdown.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[13772]: Finished Exit the Session.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[13772]: Reached target Exit the Session.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[1]: user@0.service: Deactivated successfully.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[1]: Stopped User Manager for UID 0.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[1]: libpod-db79bfa80d68957335babf99482fda9effd7e4582198b21efa3e1b317368a841.scope: Deactivated successfully.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[1]: Removed slice User Slice of UID 0.
Jan 16 21:04:14 api-int.lab.ocpipi.lan systemd[1]: user-0.slice: Consumed 13.168s CPU time.
Jan 16 21:04:15 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-db79bfa80d68957335babf99482fda9effd7e4582198b21efa3e1b317368a841-userdata-shm.mount: Deactivated successfully.
Jan 16 21:04:15 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-785a13805c4c0f84b92016b45e7d6b260a053c64c2d2d6ea816a4fd5ac316f7b-merged.mount: Deactivated successfully.
Jan 16 21:04:16 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container b51a9c6b7674269ac59dd26a2a75acc7447da190c54dbb90a35b712305cb35ff.
Jan 16 21:04:16 api-int.lab.ocpipi.lan systemd[1]: libpod-b51a9c6b7674269ac59dd26a2a75acc7447da190c54dbb90a35b712305cb35ff.scope: Deactivated successfully.
Jan 16 21:04:16 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b51a9c6b7674269ac59dd26a2a75acc7447da190c54dbb90a35b712305cb35ff-userdata-shm.mount: Deactivated successfully.
Jan 16 21:04:16 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-7b157d8e947f5f96e900da92a9d714a7e9ffe7d32704abeb3753ccf40c6a8f7d-merged.mount: Deactivated successfully.
Jan 16 21:04:17 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 5255de96bd8d5726dba9943c664980e989bd087d0948470db75f7b84ed6bf860.
Jan 16 21:04:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:17.788476 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:04:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:17.803528 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:04:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:17.803821 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:04:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:17.803886 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:04:18 api-int.lab.ocpipi.lan systemd[1]: libpod-5255de96bd8d5726dba9943c664980e989bd087d0948470db75f7b84ed6bf860.scope: Deactivated successfully.
Jan 16 21:04:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:18.467185 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:04:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:18.477820 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:04:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:18.478731 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:04:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:18.479129 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:04:18 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5255de96bd8d5726dba9943c664980e989bd087d0948470db75f7b84ed6bf860-userdata-shm.mount: Deactivated successfully.
Jan 16 21:04:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.UDlJz3.mount: Deactivated successfully.
Jan 16 21:04:18 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-ff0ba21aa26e47b047b817089959699660ddbb9c119199e1653ef809661f0646-merged.mount: Deactivated successfully.
Jan 16 21:04:19 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container caef61d15cc422d47d303d53e1b7943e28203bbc45c219866518e8fb2f2a3399.
Jan 16 21:04:19 api-int.lab.ocpipi.lan systemd[1]: libpod-caef61d15cc422d47d303d53e1b7943e28203bbc45c219866518e8fb2f2a3399.scope: Deactivated successfully.
Jan 16 21:04:20 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-caef61d15cc422d47d303d53e1b7943e28203bbc45c219866518e8fb2f2a3399-userdata-shm.mount: Deactivated successfully.
Jan 16 21:04:20 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-abd7d31191662abef273ea09870cc7c084ec2c11d90ad19c0be993464b40c4b1-merged.mount: Deactivated successfully.
Jan 16 21:04:20 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 477b34432771a95b2a6c4ec1ca68dc0dd19a8d94a293497b6eb5cf2a071d0fa2.
Jan 16 21:04:21 api-int.lab.ocpipi.lan systemd[1]: libpod-477b34432771a95b2a6c4ec1ca68dc0dd19a8d94a293497b6eb5cf2a071d0fa2.scope: Deactivated successfully.
Jan 16 21:04:21 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-cb27fcc605928eec5944f6b91406990ad3574ea09ef39bb96954d0530578333c-merged.mount: Deactivated successfully.
Jan 16 21:04:21 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-477b34432771a95b2a6c4ec1ca68dc0dd19a8d94a293497b6eb5cf2a071d0fa2-userdata-shm.mount: Deactivated successfully.
Jan 16 21:04:22 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 0d597834ac8637295265615e975f4b160fbc946e2e5099e30dd16b32cf71d0bf.
Jan 16 21:04:22 api-int.lab.ocpipi.lan systemd[1]: libpod-0d597834ac8637295265615e975f4b160fbc946e2e5099e30dd16b32cf71d0bf.scope: Deactivated successfully.
Jan 16 21:04:23 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d597834ac8637295265615e975f4b160fbc946e2e5099e30dd16b32cf71d0bf-userdata-shm.mount: Deactivated successfully.
Jan 16 21:04:23 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-d24c83e1751f2f0e75ebd540e306eaefb312d5e6637872cc2572724afdd0c349-merged.mount: Deactivated successfully.
Jan 16 21:04:23 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 5677bba46def6af04cfaa36b48d7164e14b4de49ba9d1f24f1ca5733d507f81e.
Jan 16 21:04:24 api-int.lab.ocpipi.lan systemd[1]: libpod-5677bba46def6af04cfaa36b48d7164e14b4de49ba9d1f24f1ca5733d507f81e.scope: Deactivated successfully.
Jan 16 21:04:24 api-int.lab.ocpipi.lan systemd[1]: run-runc-5677bba46def6af04cfaa36b48d7164e14b4de49ba9d1f24f1ca5733d507f81e-runc.fgHhu7.mount: Deactivated successfully.
Jan 16 21:04:24 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5677bba46def6af04cfaa36b48d7164e14b4de49ba9d1f24f1ca5733d507f81e-userdata-shm.mount: Deactivated successfully.
Jan 16 21:04:24 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-67cccab4448af1e704e408388dc7a1c7120b278b2a31834801bbd72f428e3bee-merged.mount: Deactivated successfully.
Jan 16 21:04:24 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container bb96dacfbf20865f36007647cb32a8e90e20aa3c6bc4bf8d48418ea13b2a8256.
Jan 16 21:04:25 api-int.lab.ocpipi.lan systemd[1]: libpod-bb96dacfbf20865f36007647cb32a8e90e20aa3c6bc4bf8d48418ea13b2a8256.scope: Deactivated successfully.
Jan 16 21:04:25 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bb96dacfbf20865f36007647cb32a8e90e20aa3c6bc4bf8d48418ea13b2a8256-userdata-shm.mount: Deactivated successfully.
Jan 16 21:04:25 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-226b2301dcf97267c166d99ff74432ee80f1e1f6484a978903a040e03d12ebed-merged.mount: Deactivated successfully.
Jan 16 21:04:26 api-int.lab.ocpipi.lan approve-csr.sh[15157]: No resources found
Jan 16 21:04:26 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 71bff5ee9736b55eeeb58737c20acb76238b2c5bacd7000c1f65985be9f33be3.
Jan 16 21:04:27 api-int.lab.ocpipi.lan systemd[1]: libpod-71bff5ee9736b55eeeb58737c20acb76238b2c5bacd7000c1f65985be9f33be3.scope: Deactivated successfully.
Jan 16 21:04:27 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-71bff5ee9736b55eeeb58737c20acb76238b2c5bacd7000c1f65985be9f33be3-userdata-shm.mount: Deactivated successfully.
Jan 16 21:04:27 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-45e4235c582b83464f0b1388dcf55202f25cffb5995d0e724db678b0022d0153-merged.mount: Deactivated successfully.
Jan 16 21:04:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:27.864222 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:04:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:27.877181 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:04:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:27.877285 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:04:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:27.877334 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:04:28 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 97dee6c4a8ea26160a1957f7ad2d4deac6896215fd4e7df53b53993f5332650b.
Jan 16 21:04:28 api-int.lab.ocpipi.lan systemd[1]: libpod-97dee6c4a8ea26160a1957f7ad2d4deac6896215fd4e7df53b53993f5332650b.scope: Deactivated successfully.
Jan 16 21:04:28 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.lkL4j7.mount: Deactivated successfully.
Jan 16 21:04:28 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-97dee6c4a8ea26160a1957f7ad2d4deac6896215fd4e7df53b53993f5332650b-userdata-shm.mount: Deactivated successfully.
Jan 16 21:04:28 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-5562648b57fd0316a317790ce5a165232e8b72cce94bca8a1f69ae4deed4c2f0-merged.mount: Deactivated successfully.
Jan 16 21:04:29 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 9272244517fca4f8bc492b6a8a38fa70c8943535fd4b3f2c99fd8ffc7464a57d.
Jan 16 21:04:30 api-int.lab.ocpipi.lan systemd[1]: libpod-9272244517fca4f8bc492b6a8a38fa70c8943535fd4b3f2c99fd8ffc7464a57d.scope: Deactivated successfully.
Jan 16 21:04:30 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9272244517fca4f8bc492b6a8a38fa70c8943535fd4b3f2c99fd8ffc7464a57d-userdata-shm.mount: Deactivated successfully.
Jan 16 21:04:30 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-b19f7263bfbe70a6897f7c34c7bfa3f922054bbe1fee79e67a321adf19b0d100-merged.mount: Deactivated successfully.
Jan 16 21:04:33 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:04:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:33.466844 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:04:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:33.474502 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:04:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:33.474742 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:04:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:33.474802 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:04:33 api-int.lab.ocpipi.lan bootkube.sh[14240]: Check if API and API-Int URLs are resolvable during bootstrap
Jan 16 21:04:33 api-int.lab.ocpipi.lan bootkube.sh[14240]: Checking if api.lab.ocpipi.lan of type API_URL is resolvable
Jan 16 21:04:33 api-int.lab.ocpipi.lan bootkube.sh[14240]: Starting stage resolve-api-url
Jan 16 21:04:33 api-int.lab.ocpipi.lan bootkube.sh[14240]: Successfully resolved API_URL api.lab.ocpipi.lan
Jan 16 21:04:34 api-int.lab.ocpipi.lan bootkube.sh[14240]: Checking if api-int.lab.ocpipi.lan of type API_INT_URL is resolvable
Jan 16 21:04:34 api-int.lab.ocpipi.lan bootkube.sh[14240]: Starting stage resolve-api-int-url
Jan 16 21:04:34 api-int.lab.ocpipi.lan bootkube.sh[14240]: Successfully resolved API_INT_URL api-int.lab.ocpipi.lan
Jan 16 21:04:35 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container c1e22948a05eae5de3d250ef98c063bb8ad0615c2f739def87d776e29d8f80f1.
Jan 16 21:04:36 api-int.lab.ocpipi.lan bootkube.sh[15472]: https://localhost:2379 is healthy: successfully committed proposal: took = 75.567777ms
Jan 16 21:04:36 api-int.lab.ocpipi.lan systemd[1]: libpod-c1e22948a05eae5de3d250ef98c063bb8ad0615c2f739def87d776e29d8f80f1.scope: Deactivated successfully.
Jan 16 21:04:36 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c1e22948a05eae5de3d250ef98c063bb8ad0615c2f739def87d776e29d8f80f1-userdata-shm.mount: Deactivated successfully.
Jan 16 21:04:36 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-0be30acaccc8827b2eaa4cc048f0ba7fd9e8353b3e1c6b598c7639914760ec21-merged.mount: Deactivated successfully.
Jan 16 21:04:36 api-int.lab.ocpipi.lan bootkube.sh[14240]: Starting cluster-bootstrap...
Jan 16 21:04:37 api-int.lab.ocpipi.lan systemd[1]: run-runc-3232abd9ed1814fde82a2012389e5479b4bf5a09df8d026c4d9e28bb75b0447b-runc.bkeGYy.mount: Deactivated successfully.
Jan 16 21:04:37 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 3232abd9ed1814fde82a2012389e5479b4bf5a09df8d026c4d9e28bb75b0447b.
Jan 16 21:04:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:37.931348 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:04:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:37.938428 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:04:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:37.938716 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:04:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:37.938778 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:04:38 api-int.lab.ocpipi.lan bootkube.sh[15560]: Starting temporary bootstrap control plane...
Jan 16 21:04:38 api-int.lab.ocpipi.lan bootkube.sh[15560]: Waiting up to 20m0s for the Kubernetes API
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.187494 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain]
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.187882 2579 topology_manager.go:212] "Topology Admit Handler" podUID=05c96ce8daffad47cf2b15e2a67753ec podNamespace="openshift-cluster-version" podName="bootstrap-cluster-version-operator-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:04:38.188268 2579 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c3db590e56a311b869092b2d6b1724e5" containerName="cluster-policy-controller"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.188349 2579 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3db590e56a311b869092b2d6b1724e5" containerName="cluster-policy-controller"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:04:38.188390 2579 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c3db590e56a311b869092b2d6b1724e5" containerName="kube-controller-manager"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.188420 2579 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3db590e56a311b869092b2d6b1724e5" containerName="kube-controller-manager"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:04:38.188449 2579 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b8b0f2012ce2b145220be181d7a5aa55" containerName="kube-scheduler"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.188475 2579 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b0f2012ce2b145220be181d7a5aa55" containerName="kube-scheduler"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:04:38.188507 2579 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a6238b9f1f3a2f2bd2b4b1b0c7962bdd" containerName="cloud-credential-operator"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.188533 2579 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6238b9f1f3a2f2bd2b4b1b0c7962bdd" containerName="cloud-credential-operator"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:04:38.188565 2579 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1cb3be1f2df5273e9b77f7050777bcbe" containerName="setup"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.188700 2579 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb3be1f2df5273e9b77f7050777bcbe" containerName="setup"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:04:38.188743 2579 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1cb3be1f2df5273e9b77f7050777bcbe" containerName="kube-apiserver-insecure-readyz"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.188773 2579 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb3be1f2df5273e9b77f7050777bcbe" containerName="kube-apiserver-insecure-readyz"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:04:38.188801 2579 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1cb3be1f2df5273e9b77f7050777bcbe" containerName="kube-apiserver"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.188826 2579 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb3be1f2df5273e9b77f7050777bcbe" containerName="kube-apiserver"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.189216 2579 memory_manager.go:346] "RemoveStaleState removing state" podUID="b8b0f2012ce2b145220be181d7a5aa55" containerName="kube-scheduler"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.189262 2579 memory_manager.go:346] "RemoveStaleState removing state" podUID="1cb3be1f2df5273e9b77f7050777bcbe" containerName="kube-apiserver-insecure-readyz"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.189312 2579 memory_manager.go:346] "RemoveStaleState removing state" podUID="c3db590e56a311b869092b2d6b1724e5" containerName="cluster-policy-controller"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.189388 2579 memory_manager.go:346] "RemoveStaleState removing state" podUID="c3db590e56a311b869092b2d6b1724e5" containerName="kube-controller-manager"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.189425 2579 memory_manager.go:346] "RemoveStaleState removing state" podUID="1cb3be1f2df5273e9b77f7050777bcbe" containerName="kube-apiserver"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.189473 2579 memory_manager.go:346] "RemoveStaleState removing state" podUID="a6238b9f1f3a2f2bd2b4b1b0c7962bdd" containerName="cloud-credential-operator"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.189671 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.194243 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.194316 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.194358 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.197192 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-kubeconfig\") pod \"bootstrap-cluster-version-operator-localhost.localdomain\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.197329 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-etc-ssl-certs\") pod \"bootstrap-cluster-version-operator-localhost.localdomain\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.203830 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain]
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.206190 2579 topology_manager.go:212] "Topology Admit Handler" podUID=a6238b9f1f3a2f2bd2b4b1b0c7962bdd podNamespace="openshift-cloud-credential-operator" podName="cloud-credential-operator-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:04:38.208187 2579 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c3db590e56a311b869092b2d6b1724e5" containerName="kube-controller-manager"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.208232 2579 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3db590e56a311b869092b2d6b1724e5" containerName="kube-controller-manager"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.208296 2579 memory_manager.go:346] "RemoveStaleState removing state" podUID="c3db590e56a311b869092b2d6b1724e5" containerName="kube-controller-manager"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.208343 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.212324 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.212533 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.212699 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.216128 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain]
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.243241 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[kube-system/bootstrap-kube-controller-manager-localhost.localdomain]
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.243762 2579 topology_manager.go:212] "Topology Admit Handler" podUID=c3db590e56a311b869092b2d6b1724e5 podNamespace="kube-system" podName="bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.244753 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:04:38 api-int.lab.ocpipi.lan systemd[1]: Created slice libcontainer container kubepods-besteffort-pod05c96ce8daffad47cf2b15e2a67753ec.slice.
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.253832 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.254171 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.254232 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.265399 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[kube-system/bootstrap-kube-scheduler-localhost.localdomain]
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.265573 2579 topology_manager.go:212] "Topology Admit Handler" podUID=b8b0f2012ce2b145220be181d7a5aa55 podNamespace="kube-system" podName="bootstrap-kube-scheduler-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.265814 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.270785 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.271562 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.271712 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.279493 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.284831 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.285525 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.286225 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:04:38 api-int.lab.ocpipi.lan systemd[1]: Created slice libcontainer container kubepods-besteffort-poda6238b9f1f3a2f2bd2b4b1b0c7962bdd.slice.
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.299700 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-etc-ssl-certs\") pod \"bootstrap-cluster-version-operator-localhost.localdomain\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.301748 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-kubeconfig\") pod \"bootstrap-cluster-version-operator-localhost.localdomain\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.302261 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-kubeconfig\") pod \"bootstrap-cluster-version-operator-localhost.localdomain\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.302304 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-etc-ssl-certs\") pod \"bootstrap-cluster-version-operator-localhost.localdomain\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.311875 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.317156 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.317742 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.318756 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:04:38 api-int.lab.ocpipi.lan systemd[1]: Created slice libcontainer container kubepods-burstable-podc3db590e56a311b869092b2d6b1724e5.slice.
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.390401 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.394861 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.395294 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.395353 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.403095 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a6238b9f1f3a2f2bd2b4b1b0c7962bdd-secrets\") pod \"cloud-credential-operator-localhost.localdomain\" (UID: \"a6238b9f1f3a2f2bd2b4b1b0c7962bdd\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.403304 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-secrets\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.403823 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-config\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.404813 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.405141 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-logs\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.405239 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-secrets\") pod \"bootstrap-kube-scheduler-localhost.localdomain\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.405315 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-logs\") pod \"bootstrap-kube-scheduler-localhost.localdomain\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.405677 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan systemd[1]: Created slice libcontainer container kubepods-burstable-podb8b0f2012ce2b145220be181d7a5aa55.slice.
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.435726 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.449549 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.449738 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.449788 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.512225 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.513094 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.514073 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-logs\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.514307 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-logs\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.514410 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-secrets\") pod \"bootstrap-kube-scheduler-localhost.localdomain\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.514506 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-logs\") pod \"bootstrap-kube-scheduler-localhost.localdomain\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.514710 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.514822 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a6238b9f1f3a2f2bd2b4b1b0c7962bdd-secrets\") pod \"cloud-credential-operator-localhost.localdomain\" (UID: \"a6238b9f1f3a2f2bd2b4b1b0c7962bdd\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.515425 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-logs\") pod \"bootstrap-kube-scheduler-localhost.localdomain\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.515561 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.515851 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a6238b9f1f3a2f2bd2b4b1b0c7962bdd-secrets\") pod \"cloud-credential-operator-localhost.localdomain\" (UID: \"a6238b9f1f3a2f2bd2b4b1b0c7962bdd\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.516175 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-secrets\") pod \"bootstrap-kube-scheduler-localhost.localdomain\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.516309 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-secrets\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.516413 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-config\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.516520 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-config\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.516756 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-secrets\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.588384 2579 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.589829537Z" level=info msg="Stopping pod sandbox: 70686be8a2d87683a00828f4233d059638689db262cbef7d341c1f46aeb3fd09" id=de0df515-1256-4c08-a49f-c31f4c9d110e name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.590766693Z" level=info msg="Stopped pod sandbox (already stopped): 70686be8a2d87683a00828f4233d059638689db262cbef7d341c1f46aeb3fd09" id=de0df515-1256-4c08-a49f-c31f4c9d110e name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.594741224Z" level=info msg="Running pod sandbox: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/POD" id=7f283d85-4c07-444e-ba5f-b2d99ac9a36a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.595251438Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.622533 2579 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.630868194Z" level=info msg="Stopping pod sandbox: 26024c8016ef3e2119dd507f560533c94af57eb36863fae575a12ac36b7c6b00" id=82fde2de-4566-4f9f-947d-da3c668666ee name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.631551859Z" level=info msg="Stopped pod sandbox (already stopped): 26024c8016ef3e2119dd507f560533c94af57eb36863fae575a12ac36b7c6b00" id=82fde2de-4566-4f9f-947d-da3c668666ee name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.635490939Z" level=info msg="Running pod sandbox: openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/POD" id=2b2e2f98-4d82-493f-a835-72eaa88ae31d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.638755823Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: W0116 21:04:38.681144 2579 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05c96ce8daffad47cf2b15e2a67753ec.slice/crio-f7a6f707f3bda3601ec15d7bd6975ac503d8a121077b16831d7ae849142883fd WatchSource:0}: Error finding container f7a6f707f3bda3601ec15d7bd6975ac503d8a121077b16831d7ae849142883fd: Status 404 returned error can't find the container with id f7a6f707f3bda3601ec15d7bd6975ac503d8a121077b16831d7ae849142883fd
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.690792413Z" level=info msg="Ran pod sandbox f7a6f707f3bda3601ec15d7bd6975ac503d8a121077b16831d7ae849142883fd with infra container: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/POD" id=7f283d85-4c07-444e-ba5f-b2d99ac9a36a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.698279 2579 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.701182708Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=8890a9ea-00b2-4126-b4ab-d243f315c1bf name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.702399152Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:40e15091a793905eb63a02d951105fc5c5904bfb294f8004c052ac950c9ac44a,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa],Size_:522846560,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=8890a9ea-00b2-4126-b4ab-d243f315c1bf name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.703141991Z" level=info msg="Stopping pod sandbox: 79c10015fd162b8e62ecb33ebeccbd5e476b9a518fb7eb7c00b519d5bb0eb934" id=d6789246-a4c0-4908-848f-772099f6cf28 name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.703373531Z" level=info msg="Stopped pod sandbox (already stopped): 79c10015fd162b8e62ecb33ebeccbd5e476b9a518fb7eb7c00b519d5bb0eb934" id=d6789246-a4c0-4908-848f-772099f6cf28 name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.704445 2579 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.704896 2579 provider.go:82] Docker config file not found: couldn't find valid .dockercfg after checking in [/var/lib/kubelet /]
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: W0116 21:04:38.705415 2579 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6238b9f1f3a2f2bd2b4b1b0c7962bdd.slice/crio-d33fcfd348cf2c3a24fa9aa431b71f77fb2351c6c87e2ab3eb4270f280959111 WatchSource:0}: Error finding container d33fcfd348cf2c3a24fa9aa431b71f77fb2351c6c87e2ab3eb4270f280959111: Status 404 returned error can't find the container with id d33fcfd348cf2c3a24fa9aa431b71f77fb2351c6c87e2ab3eb4270f280959111
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.706143170Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=8998f5ef-e843-491d-a3b7-d585465aadc5 name=/runtime.v1.ImageService/PullImage
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.709188539Z" level=info msg="Ran pod sandbox d33fcfd348cf2c3a24fa9aa431b71f77fb2351c6c87e2ab3eb4270f280959111 with infra container: openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/POD" id=2b2e2f98-4d82-493f-a835-72eaa88ae31d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.714125049Z" level=info msg="Running pod sandbox: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/POD" id=71cf20a2-4eae-42ae-85c0-f3c96bf365ed name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.715553309Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.719553132Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1cccbc92c83dd170dea8cb72a09e96facba21f3fdf5e3dd3f3009796c481cd67" id=92d5d1fe-68fb-4e9e-b40c-32ef60fa4223 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.720330790Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:90bdc1613647030f9fe768ad330e8ff0dca1cc04bf002dc32974238943125b9c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1cccbc92c83dd170dea8cb72a09e96facba21f3fdf5e3dd3f3009796c481cd67],Size_:704416475,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=92d5d1fe-68fb-4e9e-b40c-32ef60fa4223 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.723396231Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1cccbc92c83dd170dea8cb72a09e96facba21f3fdf5e3dd3f3009796c481cd67" id=7c388116-10a3-4826-8e8c-1a719da15c53 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.727150196Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:90bdc1613647030f9fe768ad330e8ff0dca1cc04bf002dc32974238943125b9c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1cccbc92c83dd170dea8cb72a09e96facba21f3fdf5e3dd3f3009796c481cd67],Size_:704416475,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=7c388116-10a3-4826-8e8c-1a719da15c53 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.727162812Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa\""
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.735729987Z" level=info msg="Creating container: openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/cloud-credential-operator" id=d848f08f-54cb-4e3f-a392-ba821c6feba4 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:38.751794 2579 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain"
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.752531221Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.758527476Z" level=info msg="Stopping pod sandbox: 8ef4b7210274a6b52b1f275b2b88575b44667f9376ae93b8eea1a279639e87b6" id=09a91137-a4b0-4d6e-b304-ba1909434d61 name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.758901623Z" level=info msg="Stopped pod sandbox (already stopped): 8ef4b7210274a6b52b1f275b2b88575b44667f9376ae93b8eea1a279639e87b6" id=09a91137-a4b0-4d6e-b304-ba1909434d61 name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.768406156Z" level=info msg="Running pod sandbox: kube-system/bootstrap-kube-scheduler-localhost.localdomain/POD" id=b08e4f72-39d8-49dd-be63-3a73158fabf5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.768747794Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 21:04:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: W0116 21:04:38.833559 2579 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3db590e56a311b869092b2d6b1724e5.slice/crio-df7414d7cd6ee869f535f4228fa7b7b23f6ac8632001d1dce75dcc25250e3f1b WatchSource:0}: Error finding container df7414d7cd6ee869f535f4228fa7b7b23f6ac8632001d1dce75dcc25250e3f1b: Status 404 returned error can't find the container with id df7414d7cd6ee869f535f4228fa7b7b23f6ac8632001d1dce75dcc25250e3f1b
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.835159009Z" level=info msg="Ran pod sandbox df7414d7cd6ee869f535f4228fa7b7b23f6ac8632001d1dce75dcc25250e3f1b with infra container: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/POD" id=71cf20a2-4eae-42ae-85c0-f3c96bf365ed name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.839131619Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=2feff763-bac1-43f4-bc1a-73967b07d830 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.840722141Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=2feff763-bac1-43f4-bc1a-73967b07d830 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.843431785Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=ad620e35-27d9-453a-bc47-270ae77e4c58 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.844434804Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=ad620e35-27d9-453a-bc47-270ae77e4c58 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.850696862Z" level=info msg="Creating container: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager" id=ec60414f-564f-4476-94d8-d10ae1258977 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.851172836Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.854714137Z" level=info msg="Ran pod sandbox 698793765b36d11ab57ecfa5b37f206a0c00f023a38088216f8e7b16931b26a4 with infra container: kube-system/bootstrap-kube-scheduler-localhost.localdomain/POD" id=b08e4f72-39d8-49dd-be63-3a73158fabf5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.860303144Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=c06a0353-0304-45c3-974f-f1ba145aca11 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.868724155Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=c06a0353-0304-45c3-974f-f1ba145aca11 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.872698184Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=5a39eb6b-2dca-4203-936f-f4d42c44dfa7 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.874720279Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=5a39eb6b-2dca-4203-936f-f4d42c44dfa7 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.878317370Z" level=info msg="Creating container: kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler" id=f4b2edac-f928-4863-b1a3-0757930153f1 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 21:04:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:38.878862749Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 21:04:39 api-int.lab.ocpipi.lan bootkube.sh[15560]: API is up
Jan 16 21:04:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:39.664265 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" event=&{ID:c3db590e56a311b869092b2d6b1724e5 Type:ContainerStarted Data:df7414d7cd6ee869f535f4228fa7b7b23f6ac8632001d1dce75dcc25250e3f1b}
Jan 16 21:04:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:39.684368 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" event=&{ID:b8b0f2012ce2b145220be181d7a5aa55 Type:ContainerStarted Data:698793765b36d11ab57ecfa5b37f206a0c00f023a38088216f8e7b16931b26a4}
Jan 16 21:04:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:39.691569 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" event=&{ID:a6238b9f1f3a2f2bd2b4b1b0c7962bdd Type:ContainerStarted Data:d33fcfd348cf2c3a24fa9aa431b71f77fb2351c6c87e2ab3eb4270f280959111}
Jan 16 21:04:39 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa.scope.
Jan 16 21:04:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:39.716783 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" event=&{ID:05c96ce8daffad47cf2b15e2a67753ec Type:ContainerStarted Data:f7a6f707f3bda3601ec15d7bd6975ac503d8a121077b16831d7ae849142883fd}
Jan 16 21:04:39 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026.scope.
Jan 16 21:04:39 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa.
Jan 16 21:04:39 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_00_cluster-version-operator_00_namespace.yaml" namespaces.v1./openshift-cluster-version -n as it already exists
Jan 16 21:04:39 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4.scope.
Jan 16 21:04:39 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_00_cluster-version-operator_01_adminack_configmap.yaml" configmaps.v1./admin-acks -n openshift-config as it already exists
Jan 16 21:04:39 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_00_cluster-version-operator_01_admingate_configmap.yaml" configmaps.v1./admin-gates -n openshift-config-managed as it already exists
Jan 16 21:04:39 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026.
Jan 16 21:04:39 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_00_cluster-version-operator_01_clusteroperator.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/clusteroperators.config.openshift.io -n as it already exists
Jan 16 21:04:39 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_00_cluster-version-operator_01_clusterversion.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/clusterversions.config.openshift.io -n as it already exists
Jan 16 21:04:39 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4.
Jan 16 21:04:39 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_00_cluster-version-operator_02_roles.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/cluster-version-operator -n as it already exists Jan 16 21:04:39 api-int.lab.ocpipi.lan systemd[1]: run-runc-5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4-runc.5ctcie.mount: Deactivated successfully. Jan 16 21:04:39 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_00_cluster-version-operator_03_deployment.yaml" deployments.v1.apps/cluster-version-operator -n openshift-cluster-version as it already exists Jan 16 21:04:39 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_00_namespace-openshift-infra.yaml" namespaces.v1./openshift-infra -n as it already exists Jan 16 21:04:39 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/rolebindingrestrictions.authorization.openshift.io -n as it already exists Jan 16 21:04:39 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_03_config-operator_01_proxy.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/proxies.config.openshift.io -n as it already exists Jan 16 21:04:39 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:39.957336901Z" level=info msg="Created container fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager" id=ec60414f-564f-4476-94d8-d10ae1258977 name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:04:39 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:39.961340753Z" level=info msg="Starting container: fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa" id=ae8bdb3d-417b-4cae-bdd6-baacc6c19d20 name=/runtime.v1.RuntimeService/StartContainer Jan 16 21:04:39 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:39.995116003Z" level=info msg="Started container" PID=15664 containerID=fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa description=kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager id=ae8bdb3d-417b-4cae-bdd6-baacc6c19d20 name=/runtime.v1.RuntimeService/StartContainer sandboxID=df7414d7cd6ee869f535f4228fa7b7b23f6ac8632001d1dce75dcc25250e3f1b Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.000228065Z" level=info msg="Created container 5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4: kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler" id=f4b2edac-f928-4863-b1a3-0757930153f1 name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.006101238Z" level=info msg="Starting container: 5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4" id=3695e834-fbd4-4229-9484-8d1cd6f1f28a name=/runtime.v1.RuntimeService/StartContainer Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.018876498Z" level=info msg="Created container 53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026: openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/cloud-credential-operator" id=d848f08f-54cb-4e3f-a392-ba821c6feba4 name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:04:40 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_03_quota-openshift_01_clusterresourcequota.crd.yaml" 
customresourcedefinitions.v1.apiextensions.k8s.io/clusterresourcequotas.quota.openshift.io -n as it already exists Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.032741091Z" level=info msg="Starting container: 53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026" id=bbe6a8f4-45d2-486b-8bea-a2407d4b1e54 name=/runtime.v1.RuntimeService/StartContainer Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.045221692Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:384b0d1665ce12136ede1c708c4542d12eac1f788528f8bc77cb52d871057437" id=d2843d1b-a506-4ea7-8869-de84209e5195 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.046038899Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c6ce09d75120c7c75b95c587ffc4a7a3f18cc099961eab2583e449102365e5b0,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:384b0d1665ce12136ede1c708c4542d12eac1f788528f8bc77cb52d871057437],Size_:535546139,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=d2843d1b-a506-4ea7-8869-de84209e5195 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.047846103Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:384b0d1665ce12136ede1c708c4542d12eac1f788528f8bc77cb52d871057437" id=06ab1207-d5df-47c4-b7ec-33ae438134c1 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.048660300Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c6ce09d75120c7c75b95c587ffc4a7a3f18cc099961eab2583e449102365e5b0,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:384b0d1665ce12136ede1c708c4542d12eac1f788528f8bc77cb52d871057437],Size_:535546139,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=06ab1207-d5df-47c4-b7ec-33ae438134c1 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.053794812Z" level=info msg="Creating container: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/cluster-policy-controller" id=96124cfe-7941-4ffb-86b6-8612055156e1 name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.054414659Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.064135104Z" level=info msg="Started container" PID=15684 containerID=5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4 description=kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler id=3695e834-fbd4-4229-9484-8d1cd6f1f28a name=/runtime.v1.RuntimeService/StartContainer sandboxID=698793765b36d11ab57ecfa5b37f206a0c00f023a38088216f8e7b16931b26a4 Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.105088738Z" level=info msg="Started container" PID=15682 containerID=53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026 description=openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/cloud-credential-operator id=bbe6a8f4-45d2-486b-8bea-a2407d4b1e54 
name=/runtime.v1.RuntimeService/StartContainer sandboxID=d33fcfd348cf2c3a24fa9aa431b71f77fb2351c6c87e2ab3eb4270f280959111 Jan 16 21:04:40 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_03_security-openshift_01_scc.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/securitycontextconstraints.security.openshift.io -n as it already exists Jan 16 21:04:40 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_03_securityinternal-openshift_02_rangeallocation.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/rangeallocations.security.internal.openshift.io -n as it already exists Jan 16 21:04:40 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe.scope. Jan 16 21:04:40 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe. Jan 16 21:04:40 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_10_config-operator_01_apiserver-Default.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/apiservers.config.openshift.io -n as it already exists Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.696361079Z" level=info msg="Created container 180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/cluster-policy-controller" id=96124cfe-7941-4ffb-86b6-8612055156e1 name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.697528413Z" level=info msg="Starting container: 180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe" id=b09e81c0-45cb-4dda-94a1-b11109bec358 name=/runtime.v1.RuntimeService/StartContainer Jan 16 21:04:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:40.728143 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" event=&{ID:a6238b9f1f3a2f2bd2b4b1b0c7962bdd Type:ContainerStarted Data:53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026} Jan 16 21:04:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:40.728686 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:04:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:40.733840 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:04:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:40.734022 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:04:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:40.734051 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:04:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:40.737746 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" event=&{ID:c3db590e56a311b869092b2d6b1724e5 Type:ContainerStarted Data:fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa} Jan 16 21:04:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:40.743049 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" event=&{ID:b8b0f2012ce2b145220be181d7a5aa55 Type:ContainerStarted 
Data:5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4} Jan 16 21:04:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:40.743438 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.744196277Z" level=info msg="Started container" PID=15786 containerID=180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe description=kube-system/bootstrap-kube-controller-manager-localhost.localdomain/cluster-policy-controller id=b09e81c0-45cb-4dda-94a1-b11109bec358 name=/runtime.v1.RuntimeService/StartContainer sandboxID=df7414d7cd6ee869f535f4228fa7b7b23f6ac8632001d1dce75dcc25250e3f1b Jan 16 21:04:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:40.747891 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:04:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:40.748058 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:04:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:40.748088 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:04:40 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_10_config-operator_01_authentication.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/authentications.config.openshift.io -n as it already exists Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.970659002Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=8998f5ef-e843-491d-a3b7-d585465aadc5 name=/runtime.v1.ImageService/PullImage Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.973751084Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=765db7db-55a4-4fee-b1bc-cc430bfd500d name=/runtime.v1.ImageService/ImageStatus Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.974680882Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:40e15091a793905eb63a02d951105fc5c5904bfb294f8004c052ac950c9ac44a,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa],Size_:522846560,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=765db7db-55a4-4fee-b1bc-cc430bfd500d name=/runtime.v1.ImageService/ImageStatus Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.976635747Z" level=info msg="Creating container: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator" id=c0b94964-d550-4a1b-83fd-8be6562467fd name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:04:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:40.977113443Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 21:04:41 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_10_config-operator_01_console.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/consoles.config.openshift.io -n as it already exists Jan 16 21:04:41 
api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_10_config-operator_01_dns-Default.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/dnses.config.openshift.io -n as it already exists Jan 16 21:04:41 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f.scope. Jan 16 21:04:41 api-int.lab.ocpipi.lan systemd[1]: run-runc-64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f-runc.DOGViq.mount: Deactivated successfully. Jan 16 21:04:41 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_10_config-operator_01_featuregate.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/featuregates.config.openshift.io -n as it already exists Jan 16 21:04:41 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f. Jan 16 21:04:41 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_10_config-operator_01_image.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/images.config.openshift.io -n as it already exists Jan 16 21:04:41 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:41.615816354Z" level=info msg="Created container 64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator" id=c0b94964-d550-4a1b-83fd-8be6562467fd name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:04:41 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:41.617658426Z" level=info msg="Starting container: 64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f" id=ffba6cd7-a313-475d-af80-9326be12be53 name=/runtime.v1.RuntimeService/StartContainer Jan 16 21:04:41 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:41.654320446Z" level=info msg="Started container" PID=15839 containerID=64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f description=openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator id=ffba6cd7-a313-475d-af80-9326be12be53 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7a6f707f3bda3601ec15d7bd6975ac503d8a121077b16831d7ae849142883fd Jan 16 21:04:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:41.749777 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" event=&{ID:05c96ce8daffad47cf2b15e2a67753ec Type:ContainerStarted Data:64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f} Jan 16 21:04:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:41.750232 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:04:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:41.753295 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:04:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:41.753723 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:04:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:41.753758 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:04:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:41.758458 2579 kubelet_node_status.go:376] "Setting node annotation to 
enable volume controller attach/detach" Jan 16 21:04:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:41.759773 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:04:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:41.760798 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" event=&{ID:c3db590e56a311b869092b2d6b1724e5 Type:ContainerStarted Data:180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe} Jan 16 21:04:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:41.761289 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:04:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:41.762886 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:04:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:41.763149 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:04:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:41.763226 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:04:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:41.763680 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:04:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:41.763765 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:04:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:41.763791 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:04:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:41.765311 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:04:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:41.765419 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:04:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:41.765450 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:04:41 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_10_config-operator_01_imagecontentpolicy.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/imagecontentpolicies.config.openshift.io -n as it already exists Jan 16 21:04:42 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_10_config-operator_01_imagecontentsourcepolicy.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/imagecontentsourcepolicies.operator.openshift.io -n as it already exists Jan 16 21:04:42 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_10_config-operator_01_imagedigestmirrorset.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/imagedigestmirrorsets.config.openshift.io -n as it already exists Jan 16 21:04:42 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_10_config-operator_01_imagetagmirrorset.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/imagetagmirrorsets.config.openshift.io -n as 
it already exists Jan 16 21:04:42 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_10_config-operator_01_infrastructure-Default.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/infrastructures.config.openshift.io -n as it already exists Jan 16 21:04:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:42.763837 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:04:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:42.765757 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:04:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:42.768844 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:04:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:42.769010 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:04:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:42.769041 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:04:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:42.768897 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:04:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:42.769217 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:04:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:42.769243 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:04:42 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_10_config-operator_01_ingress.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/ingresses.config.openshift.io -n as it already exists Jan 16 21:04:43 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_10_config-operator_01_network.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/networks.config.openshift.io -n as it already exists Jan 16 21:04:43 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_10_config-operator_01_node.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/nodes.config.openshift.io -n as it already exists Jan 16 21:04:43 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_10_config-operator_01_oauth.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/oauths.config.openshift.io -n as it already exists Jan 16 21:04:43 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_10_config-operator_01_project.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/projects.config.openshift.io -n as it already exists Jan 16 21:04:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:43.792219 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 21:04:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:43.792392 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 21:04:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:43.792440 2579 kubelet_getters.go:187] "Pod status updated" 
pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 21:04:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:43.792494 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 21:04:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:43.792540 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 21:04:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:43.792669 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 21:04:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:43.792725 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 21:04:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:43.792766 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 21:04:43 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_10_config-operator_01_scheduler.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/schedulers.config.openshift.io -n as it already exists Jan 16 21:04:44 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-anyuid.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:anyuid -n as it already exists Jan 16 21:04:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:44.147296649Z" level=info msg="Stopping pod sandbox: 79c10015fd162b8e62ecb33ebeccbd5e476b9a518fb7eb7c00b519d5bb0eb934" id=ab2e4c28-3151-4ae3-ad19-82aa913babc7 name=/runtime.v1.RuntimeService/StopPodSandbox Jan 16 21:04:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:44.148904640Z" level=info msg="Stopped pod sandbox (already stopped): 79c10015fd162b8e62ecb33ebeccbd5e476b9a518fb7eb7c00b519d5bb0eb934" id=ab2e4c28-3151-4ae3-ad19-82aa913babc7 name=/runtime.v1.RuntimeService/StopPodSandbox Jan 16 21:04:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:44.153718046Z" level=info msg="Removing pod sandbox: 79c10015fd162b8e62ecb33ebeccbd5e476b9a518fb7eb7c00b519d5bb0eb934" id=1e2096fa-142d-4bd9-beb7-99da0334b166 name=/runtime.v1.RuntimeService/RemovePodSandbox Jan 16 21:04:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:44.172799773Z" level=info msg="Removed pod sandbox: 79c10015fd162b8e62ecb33ebeccbd5e476b9a518fb7eb7c00b519d5bb0eb934" id=1e2096fa-142d-4bd9-beb7-99da0334b166 name=/runtime.v1.RuntimeService/RemovePodSandbox Jan 16 21:04:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:44.176759105Z" level=info msg="Stopping pod sandbox: 26024c8016ef3e2119dd507f560533c94af57eb36863fae575a12ac36b7c6b00" id=3a62f7d7-5da1-4cb0-9b69-606a1faedb84 name=/runtime.v1.RuntimeService/StopPodSandbox Jan 16 21:04:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:44.178111106Z" level=info msg="Stopped pod sandbox (already stopped): 26024c8016ef3e2119dd507f560533c94af57eb36863fae575a12ac36b7c6b00" id=3a62f7d7-5da1-4cb0-9b69-606a1faedb84 name=/runtime.v1.RuntimeService/StopPodSandbox Jan 16 21:04:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:44.179882242Z" level=info msg="Removing pod sandbox: 26024c8016ef3e2119dd507f560533c94af57eb36863fae575a12ac36b7c6b00" id=988eeb6b-8d1e-4f5e-856e-b0369ef9f074 
name=/runtime.v1.RuntimeService/RemovePodSandbox Jan 16 21:04:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:44.210470899Z" level=info msg="Removed pod sandbox: 26024c8016ef3e2119dd507f560533c94af57eb36863fae575a12ac36b7c6b00" id=988eeb6b-8d1e-4f5e-856e-b0369ef9f074 name=/runtime.v1.RuntimeService/RemovePodSandbox Jan 16 21:04:44 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-hostaccess.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:hostaccess -n as it already exists Jan 16 21:04:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:44.233872040Z" level=info msg="Stopping pod sandbox: 70686be8a2d87683a00828f4233d059638689db262cbef7d341c1f46aeb3fd09" id=bffc3688-971d-4936-ae8a-2ce2af3531e8 name=/runtime.v1.RuntimeService/StopPodSandbox Jan 16 21:04:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:44.234342770Z" level=info msg="Stopped pod sandbox (already stopped): 70686be8a2d87683a00828f4233d059638689db262cbef7d341c1f46aeb3fd09" id=bffc3688-971d-4936-ae8a-2ce2af3531e8 name=/runtime.v1.RuntimeService/StopPodSandbox Jan 16 21:04:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:44.236468716Z" level=info msg="Removing pod sandbox: 70686be8a2d87683a00828f4233d059638689db262cbef7d341c1f46aeb3fd09" id=68ba99cf-8aa9-4b15-9cd0-ef35b34a2088 name=/runtime.v1.RuntimeService/RemovePodSandbox Jan 16 21:04:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:04:44.270423516Z" level=info msg="Removed pod sandbox: 70686be8a2d87683a00828f4233d059638689db262cbef7d341c1f46aeb3fd09" id=68ba99cf-8aa9-4b15-9cd0-ef35b34a2088 name=/runtime.v1.RuntimeService/RemovePodSandbox Jan 16 21:04:44 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-hostmount-anyuid.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:hostmount -n as it already exists Jan 16 21:04:44 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-hostnetwork-v2.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:hostnetwork-v2 -n as it already exists Jan 16 21:04:44 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-hostnetwork.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:hostnetwork -n as it already exists Jan 16 21:04:45 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-nonroot-v2.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:nonroot-v2 -n as it already exists Jan 16 21:04:45 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-nonroot.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:nonroot -n as it already exists Jan 16 21:04:45 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-privileged.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:privileged -n as it already exists Jan 16 21:04:45 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-restricted-v2.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:restricted-v2 -n as it already exists Jan 16 21:04:45 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-restricted.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:restricted -n as it already exists Jan 16 21:04:46 api-int.lab.ocpipi.lan 
bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_crb-systemauthenticated-scc-restricted-v2.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/system:openshift:scc:restricted-v2 -n as it already exists Jan 16 21:04:46 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_scc-anyuid.yaml" securitycontextconstraints.v1.security.openshift.io/anyuid -n as it already exists Jan 16 21:04:46 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_scc-hostaccess.yaml" securitycontextconstraints.v1.security.openshift.io/hostaccess -n as it already exists Jan 16 21:04:46 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_scc-hostmount-anyuid.yaml" securitycontextconstraints.v1.security.openshift.io/hostmount-anyuid -n as it already exists Jan 16 21:04:46 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_scc-hostnetwork-v2.yaml" securitycontextconstraints.v1.security.openshift.io/hostnetwork-v2 -n as it already exists Jan 16 21:04:47 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_scc-hostnetwork.yaml" securitycontextconstraints.v1.security.openshift.io/hostnetwork -n as it already exists Jan 16 21:04:47 api-int.lab.ocpipi.lan approve-csr.sh[15880]: No resources found Jan 16 21:04:47 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_scc-nonroot-v2.yaml" securitycontextconstraints.v1.security.openshift.io/nonroot-v2 -n as it already exists Jan 16 21:04:47 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_scc-nonroot.yaml" securitycontextconstraints.v1.security.openshift.io/nonroot -n as it already exists Jan 16 21:04:47 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_scc-privileged.yaml" securitycontextconstraints.v1.security.openshift.io/privileged -n as it already exists Jan 16 21:04:47 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_scc-restricted-v2.yaml" securitycontextconstraints.v1.security.openshift.io/restricted-v2 -n as it already exists Jan 16 21:04:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:47.983359 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:04:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:47.993158 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:04:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:47.994294 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:04:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:47.994757 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:04:48 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0000_20_kube-apiserver-operator_00_scc-restricted.yaml" securitycontextconstraints.v1.security.openshift.io/restricted -n as it already exists Jan 16 21:04:48 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "0001_00_cluster-version-operator_03_service.yaml" services.v1./cluster-version-operator -n openshift-cluster-version as it already exists Jan 16 21:04:48 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "00_etcd-endpoints-cm.yaml" configmaps.v1./etcd-endpoints 
-n openshift-etcd as it already exists Jan 16 21:04:48 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "00_namespace-security-allocation-controller-clusterrole.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:controller:namespace-security-allocation-controller -n as it already exists Jan 16 21:04:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:48.699376 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:04:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:48.700138 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:04:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:48.702571 2579 kubelet.go:2529] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:04:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:48.703734 2579 kubelet.go:2529] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:04:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:48.703894 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:04:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:48.709233 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:04:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:48.709515 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:04:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:48.709758 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:04:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:48.738282 2579 kubelet.go:2529] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:04:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:48.746323 2579 kubelet.go:2529] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:04:48 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "00_namespace-security-allocation-controller-clusterrolebinding.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/system:openshift:controller:namespace-security-allocation-controller -n as it already exists Jan 16 21:04:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:48.827400 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:04:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:48.832267 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:04:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:48.833826 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:04:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:48.834512 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:04:49 
api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "00_openshift-etcd-ns.yaml" namespaces.v1./openshift-etcd -n as it already exists Jan 16 21:04:49 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "00_openshift-kube-apiserver-ns.yaml" namespaces.v1./openshift-kube-apiserver -n as it already exists Jan 16 21:04:49 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "00_openshift-kube-apiserver-operator-ns.yaml" namespaces.v1./openshift-kube-apiserver-operator -n as it already exists Jan 16 21:04:49 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "00_openshift-kube-controller-manager-ns.yaml" namespaces.v1./openshift-kube-controller-manager -n as it already exists Jan 16 21:04:49 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "00_openshift-kube-controller-manager-operator-ns.yaml" namespaces.v1./openshift-kube-controller-manager-operator -n as it already exists Jan 16 21:04:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:49.834247 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:04:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:49.839690 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:04:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:49.840034 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:04:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:49.840072 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:04:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:49.886315 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:04:50 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "00_openshift-kube-scheduler-ns.yaml" namespaces.v1./openshift-kube-scheduler -n as it already exists Jan 16 21:04:50 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "00_podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:controller:privileged-namespaces-psa-label-syncer -n as it already exists Jan 16 21:04:50 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "00_podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/system:openshift:controller:privileged-namespaces-psa-label-syncer -n as it already exists Jan 16 21:04:50 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "00_podsecurity-admission-label-syncer-controller-clusterrole.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:controller:podsecurity-admission-label-syncer-controller -n as it already exists Jan 16 21:04:50 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "00_podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/system:openshift:controller:podsecurity-admission-label-syncer-controller -n as it already exists Jan 16 21:04:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:50.844274 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:04:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:50.850582 2579 kubelet_node_status.go:696] "Recording event message for 
node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:04:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:50.850869 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:04:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:50.851091 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:04:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:50.874886 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:04:51 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_baremetal-provisioning-config.yaml" provisionings.v1alpha1.metal3.io/provisioning-configuration -n as it already exists Jan 16 21:04:51 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_feature-gate.yaml" featuregates.v1.config.openshift.io/cluster -n as it already exists Jan 16 21:04:51 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_kubeadmin-password-secret.yaml" secrets.v1./kubeadmin -n kube-system as it already exists Jan 16 21:04:51 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_openshift-cluster-api_host-bmc-secrets-0.yaml" secrets.v1./cp-1-bmc-secret -n openshift-machine-api as it already exists Jan 16 21:04:51 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_openshift-cluster-api_host-bmc-secrets-1.yaml" secrets.v1./cp-2-bmc-secret -n openshift-machine-api as it already exists Jan 16 21:04:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:51.847294 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:04:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:51.852821 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:04:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:51.852895 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:04:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:51.853094 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:04:52 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_openshift-cluster-api_host-bmc-secrets-2.yaml" secrets.v1./cp-3-bmc-secret -n openshift-machine-api as it already exists Jan 16 21:04:52 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_openshift-cluster-api_host-bmc-secrets-3.yaml" secrets.v1./w-1-bmc-secret -n openshift-machine-api as it already exists Jan 16 21:04:52 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_openshift-cluster-api_host-bmc-secrets-4.yaml" secrets.v1./w-2-bmc-secret -n openshift-machine-api as it already exists Jan 16 21:04:52 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_openshift-cluster-api_hosts-0.yaml" baremetalhosts.v1alpha1.metal3.io/cp-1 -n openshift-machine-api as it already exists Jan 16 21:04:52 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_openshift-cluster-api_hosts-1.yaml" baremetalhosts.v1alpha1.metal3.io/cp-2 -n openshift-machine-api as it already exists Jan 16 21:04:53 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_openshift-cluster-api_hosts-2.yaml" baremetalhosts.v1alpha1.metal3.io/cp-3 -n openshift-machine-api as it already exists Jan 16 
21:04:53 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:04:53 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_openshift-cluster-api_hosts-3.yaml" baremetalhosts.v1alpha1.metal3.io/w-1 -n openshift-machine-api as it already exists Jan 16 21:04:53 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_openshift-cluster-api_hosts-4.yaml" baremetalhosts.v1alpha1.metal3.io/w-2 -n openshift-machine-api as it already exists Jan 16 21:04:53 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_openshift-cluster-api_master-machines-0.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-0 -n openshift-machine-api as it already exists Jan 16 21:04:53 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_openshift-cluster-api_master-machines-1.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-1 -n openshift-machine-api as it already exists Jan 16 21:04:54 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_openshift-cluster-api_master-machines-2.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-2 -n openshift-machine-api as it already exists Jan 16 21:04:54 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_openshift-cluster-api_master-user-data-secret.yaml" secrets.v1./master-user-data-managed -n openshift-machine-api as it already exists Jan 16 21:04:54 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_openshift-cluster-api_worker-machineset-0.yaml" machinesets.v1beta1.machine.openshift.io/lab-wcpsl-worker-0 -n openshift-machine-api as it already exists Jan 16 21:04:54 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_openshift-cluster-api_worker-user-data-secret.yaml" secrets.v1./worker-user-data-managed -n openshift-machine-api as it already exists Jan 16 21:04:54 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_openshift-machineconfig_99-master-ssh.yaml" machineconfigs.v1.machineconfiguration.openshift.io/99-master-ssh -n as it already exists Jan 16 21:04:55 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "99_openshift-machineconfig_99-worker-ssh.yaml" machineconfigs.v1.machineconfiguration.openshift.io/99-worker-ssh -n as it already exists Jan 16 21:04:55 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "apiserver.openshift.io_apirequestcount.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io -n as it already exists Jan 16 21:04:55 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "cco-cloudcredential_v1_credentialsrequest_crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/credentialsrequests.cloudcredential.openshift.io -n as it already exists Jan 16 21:04:55 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "cco-cloudcredential_v1_operator_config_custresdef.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/cloudcredentials.operator.openshift.io -n as it already exists Jan 16 21:04:55 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "cco-namespace.yaml" namespaces.v1./openshift-cloud-credential-operator -n as it already exists Jan 16 21:04:56 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "cco-operator-config.yaml" cloudcredentials.v1.operator.openshift.io/cluster -n as it already exists Jan 16 21:04:56 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "cluster-config.yaml" configmaps.v1./cluster-config-v1 -n kube-system as it already exists Jan 16 21:04:56 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "cluster-dns-02-config.yml" dnses.v1.config.openshift.io/cluster -n as it already exists Jan 16 21:04:56 
api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "cluster-infrastructure-02-config.yml" infrastructures.v1.config.openshift.io/cluster -n as it already exists Jan 16 21:04:56 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "cluster-ingress-00-custom-resource-definition.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/ingresscontrollers.operator.openshift.io -n as it already exists Jan 16 21:04:57 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "cluster-ingress-00-namespace.yaml" namespaces.v1./openshift-ingress-operator -n as it already exists Jan 16 21:04:57 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "cluster-ingress-02-config.yml" ingresses.v1.config.openshift.io/cluster -n as it already exists Jan 16 21:04:57 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "cluster-network-01-crd.yml" customresourcedefinitions.v1.apiextensions.k8s.io/networks.config.openshift.io -n as it already exists Jan 16 21:04:57 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "cluster-network-02-config.yml" networks.v1.config.openshift.io/cluster -n as it already exists Jan 16 21:04:57 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "cluster-proxy-01-config.yaml" proxies.v1.config.openshift.io/cluster -n as it already exists Jan 16 21:04:58 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "cluster-role-binding-kube-apiserver.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/kube-apiserver -n as it already exists Jan 16 21:04:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:58.082702 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:04:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:58.092895 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:04:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:58.093574 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:04:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:04:58.093738 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:04:58 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "cluster-role-kube-apiserver.yaml" clusterroles.v1.rbac.authorization.k8s.io/kube-apiserver -n as it already exists Jan 16 21:04:58 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "cluster-scheduler-02-config.yml" schedulers.v1.config.openshift.io/cluster -n as it already exists Jan 16 21:04:58 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "configmap-admin-kubeconfig-client-ca.yaml" configmaps.v1./admin-kubeconfig-client-ca -n openshift-config as it already exists Jan 16 21:04:58 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "configmap-csr-controller-ca.yaml" configmaps.v1./csr-controller-ca -n openshift-config-managed as it already exists Jan 16 21:04:59 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "configmap-kubelet-bootstrap-kubeconfig-ca.yaml" configmaps.v1./kubelet-bootstrap-kubeconfig -n openshift-config-managed as it already exists Jan 16 21:04:59 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "configmap-sa-token-signing-certs.yaml" configmaps.v1./sa-token-signing-certs -n openshift-config-managed as it already exists Jan 16 21:04:59 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "csr-bootstrap-role-binding.yaml" 
clusterrolebindings.v1.rbac.authorization.k8s.io/system-bootstrap-node-bootstrapper -n as it already exists Jan 16 21:04:59 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "cvo-overrides.yaml" clusterversions.v1.config.openshift.io/version -n openshift-cluster-version as it already exists Jan 16 21:04:59 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "etcd-ca-bundle-configmap.yaml" configmaps.v1./etcd-ca-bundle -n openshift-config as it already exists Jan 16 21:05:00 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "etcd-client-secret.yaml" secrets.v1./etcd-client -n openshift-config as it already exists Jan 16 21:05:00 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "etcd-metric-client-secret.yaml" secrets.v1./etcd-metric-client -n openshift-config as it already exists Jan 16 21:05:00 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "etcd-metric-serving-ca-configmap.yaml" configmaps.v1./etcd-metric-serving-ca -n openshift-config as it already exists Jan 16 21:05:00 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "etcd-metric-signer-secret.yaml" secrets.v1./etcd-metric-signer -n openshift-config as it already exists Jan 16 21:05:00 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "etcd-serving-ca-configmap.yaml" configmaps.v1./etcd-serving-ca -n openshift-config as it already exists Jan 16 21:05:01 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "etcd-signer-secret.yaml" secrets.v1./etcd-signer -n openshift-config as it already exists Jan 16 21:05:01 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "kube-apiserver-serving-ca-configmap.yaml" configmaps.v1./initial-kube-apiserver-server-ca -n openshift-config as it already exists Jan 16 21:05:01 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "kube-cloud-config.yaml" secrets.v1./kube-cloud-cfg -n kube-system as it already exists Jan 16 21:05:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:01.475498 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:05:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:01.486702 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:05:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:01.486869 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:05:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:01.487107 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:05:01 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "kube-system-configmap-root-ca.yaml" configmaps.v1./root-ca -n kube-system as it already exists Jan 16 21:05:01 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "machine-config-server-tls-secret.yaml" secrets.v1./machine-config-server-tls -n openshift-machine-config-operator as it already exists Jan 16 21:05:02 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "openshift-config-secret-pull-secret.yaml" secrets.v1./pull-secret -n openshift-config as it already exists Jan 16 21:05:02 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "openshift-etcd-svc.yaml" services.v1./etcd -n openshift-etcd as it already exists Jan 16 21:05:02 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "openshift-install-manifests.yaml" configmaps.v1./openshift-install-manifests -n openshift-config as it already exists Jan 16 21:05:02 api-int.lab.ocpipi.lan 
bootkube.sh[15560]: Skipped "openshift-install.yaml" configmaps.v1./openshift-install -n openshift-config as it already exists Jan 16 21:05:02 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "secret-aggregator-client-signer.yaml" secrets.v1./aggregator-client-signer -n openshift-kube-apiserver-operator as it already exists Jan 16 21:05:03 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "secret-bound-sa-token-signing-key.yaml" secrets.v1./next-bound-service-account-signing-key -n openshift-kube-apiserver-operator as it already exists Jan 16 21:05:03 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "secret-control-plane-client-signer.yaml" secrets.v1./kube-control-plane-signer -n openshift-kube-apiserver-operator as it already exists Jan 16 21:05:03 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "secret-csr-signer-signer.yaml" secrets.v1./csr-signer-signer -n openshift-kube-controller-manager-operator as it already exists Jan 16 21:05:03 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "secret-initial-kube-controller-manager-service-account-private-key.yaml" secrets.v1./initial-service-account-private-key -n openshift-config as it already exists Jan 16 21:05:03 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "secret-kube-apiserver-to-kubelet-signer.yaml" secrets.v1./kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator as it already exists Jan 16 21:05:04 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "secret-loadbalancer-serving-signer.yaml" secrets.v1./loadbalancer-serving-signer -n openshift-kube-apiserver-operator as it already exists Jan 16 21:05:04 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "secret-localhost-serving-signer.yaml" secrets.v1./localhost-serving-signer -n openshift-kube-apiserver-operator as it already exists Jan 16 21:05:04 api-int.lab.ocpipi.lan bootkube.sh[15560]: Skipped "secret-service-network-serving-signer.yaml" secrets.v1./service-network-serving-signer -n openshift-kube-apiserver-operator as it already exists Jan 16 21:05:07 api-int.lab.ocpipi.lan approve-csr.sh[15963]: No resources found Jan 16 21:05:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:08.174247 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:05:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:08.179170 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:05:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:08.179706 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:05:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:08.179859 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:05:11 api-int.lab.ocpipi.lan systemd[1]: crio-ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5.scope: Deactivated successfully. Jan 16 21:05:11 api-int.lab.ocpipi.lan systemd[1]: crio-ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5.scope: Consumed 10min 7.441s CPU time. Jan 16 21:05:11 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5.scope: Deactivated successfully. 
Jan 16 21:05:11 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-374b982e1d86973c5dfed6074fb9672d8abaeb9d92f5c53210030ee7b8281959-merged.mount: Deactivated successfully.
Jan 16 21:05:11 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:11.357096029Z" level=info msg="Stopped container ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver" id=8227e1ef-136c-40ae-99aa-eb389dd76882 name=/runtime.v1.RuntimeService/StopContainer
Jan 16 21:05:11 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:11.360352321Z" level=info msg="Stopping pod sandbox: ad4d9c6ed5d6ab8a2a9b57904014b7c602165d8b2f56808bf0a162e61ca5e05d" id=4ae4b45e-fbd6-490a-8a97-10c3b54a5fcf name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:05:11 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:11.372749940Z" level=info msg="Stopped pod sandbox: ad4d9c6ed5d6ab8a2a9b57904014b7c602165d8b2f56808bf0a162e61ca5e05d" id=4ae4b45e-fbd6-490a-8a97-10c3b54a5fcf name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:05:11 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-0c606a572d5977531ade2eef4c17971f94d5a3d5a94711c3bb378294691ad226-merged.mount: Deactivated successfully.
Jan 16 21:05:11 api-int.lab.ocpipi.lan systemd[1]: run-netns-bcb60ae4\x2d0b46\x2d413d\x2db0a8\x2de7bbf15ea409.mount: Deactivated successfully.
Jan 16 21:05:11 api-int.lab.ocpipi.lan systemd[1]: run-ipcns-bcb60ae4\x2d0b46\x2d413d\x2db0a8\x2de7bbf15ea409.mount: Deactivated successfully.
Jan 16 21:05:11 api-int.lab.ocpipi.lan systemd[1]: run-utsns-bcb60ae4\x2d0b46\x2d413d\x2db0a8\x2de7bbf15ea409.mount: Deactivated successfully.
Jan 16 21:05:11 api-int.lab.ocpipi.lan systemd[1]: run-containers-storage-overlay\x2dcontainers-ad4d9c6ed5d6ab8a2a9b57904014b7c602165d8b2f56808bf0a162e61ca5e05d-userdata-shm.mount: Deactivated successfully.
Jan 16 21:05:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:11.558418 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-ssl-certs-host\") pod \"1cb3be1f2df5273e9b77f7050777bcbe\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") "
Jan 16 21:05:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:11.558719 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-etc-kubernetes-cloud\") pod \"1cb3be1f2df5273e9b77f7050777bcbe\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") "
Jan 16 21:05:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:11.558802 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-audit-dir\") pod \"1cb3be1f2df5273e9b77f7050777bcbe\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") "
Jan 16 21:05:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:11.558863 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-logs\") pod \"1cb3be1f2df5273e9b77f7050777bcbe\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") "
Jan 16 21:05:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:11.558847 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "1cb3be1f2df5273e9b77f7050777bcbe" (UID: "1cb3be1f2df5273e9b77f7050777bcbe"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:05:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:11.559103 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "1cb3be1f2df5273e9b77f7050777bcbe" (UID: "1cb3be1f2df5273e9b77f7050777bcbe"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:05:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:11.559199 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-logs" (OuterVolumeSpecName: "logs") pod "1cb3be1f2df5273e9b77f7050777bcbe" (UID: "1cb3be1f2df5273e9b77f7050777bcbe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:05:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:11.559211 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "1cb3be1f2df5273e9b77f7050777bcbe" (UID: "1cb3be1f2df5273e9b77f7050777bcbe"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:05:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:11.559418 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-secrets" (OuterVolumeSpecName: "secrets") pod "1cb3be1f2df5273e9b77f7050777bcbe" (UID: "1cb3be1f2df5273e9b77f7050777bcbe"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:05:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:11.559502 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-secrets\") pod \"1cb3be1f2df5273e9b77f7050777bcbe\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") "
Jan 16 21:05:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:11.559881 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-config\") pod \"1cb3be1f2df5273e9b77f7050777bcbe\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") "
Jan 16 21:05:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:11.560169 2579 reconciler_common.go:300] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-ssl-certs-host\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:05:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:11.560224 2579 reconciler_common.go:300] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-etc-kubernetes-cloud\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:05:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:11.560407 2579 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-audit-dir\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:05:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:11.560476 2579 reconciler_common.go:300] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-logs\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:05:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:11.560700 2579 reconciler_common.go:300] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-secrets\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:05:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:11.560470 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-config" (OuterVolumeSpecName: "config") pod "1cb3be1f2df5273e9b77f7050777bcbe" (UID: "1cb3be1f2df5273e9b77f7050777bcbe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:05:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:11.661249 2579 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-config\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.021894 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" event=&{ID:1cb3be1f2df5273e9b77f7050777bcbe Type:ContainerDied Data:ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5}
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.022462 2579 scope.go:115] "RemoveContainer" containerID="3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.023384 2579 generic.go:334] "Generic (PLEG): container finished" podID=1cb3be1f2df5273e9b77f7050777bcbe containerID="ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5" exitCode=0
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.023582 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" event=&{ID:1cb3be1f2df5273e9b77f7050777bcbe Type:ContainerDied Data:ad4d9c6ed5d6ab8a2a9b57904014b7c602165d8b2f56808bf0a162e61ca5e05d}
Jan 16 21:05:12 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:12.033778275Z" level=info msg="Removing container: 3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023" id=a83054da-fff7-4237-9fab-deeb5361896d name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:05:12 api-int.lab.ocpipi.lan systemd[1]: Removed slice libcontainer container kubepods-burstable-pod1cb3be1f2df5273e9b77f7050777bcbe.slice.
Jan 16 21:05:12 api-int.lab.ocpipi.lan systemd[1]: kubepods-burstable-pod1cb3be1f2df5273e9b77f7050777bcbe.slice: Consumed 10min 8.353s CPU time.
Jan 16 21:05:12 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:12.211850489Z" level=info msg="Removed container 3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver-insecure-readyz" id=a83054da-fff7-4237-9fab-deeb5361896d name=/runtime.v1.RuntimeService/RemoveContainer Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.213345 2579 scope.go:115] "RemoveContainer" containerID="ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5" Jan 16 21:05:12 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:12.219454857Z" level=info msg="Removing container: ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5" id=737d231a-85ac-4dfc-8eb1-7b94e0138de9 name=/runtime.v1.RuntimeService/RemoveContainer Jan 16 21:05:12 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:12.305439360Z" level=info msg="Removed container ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver" id=737d231a-85ac-4dfc-8eb1-7b94e0138de9 name=/runtime.v1.RuntimeService/RemoveContainer Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.309454 2579 scope.go:115] "RemoveContainer" containerID="c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243" Jan 16 21:05:12 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:12.319513214Z" level=info msg="Removing container: c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243" id=405e2ed5-c312-4925-ab4b-02482f64df6f name=/runtime.v1.RuntimeService/RemoveContainer Jan 16 21:05:12 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-9e5da273a990d34f278ef3f7b3f6d8cd8ed111599bd1804013472e467c43ba45-merged.mount: Deactivated successfully. 
Jan 16 21:05:12 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:12.455782141Z" level=info msg="Removed container c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/setup" id=405e2ed5-c312-4925-ab4b-02482f64df6f name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.457516 2579 scope.go:115] "RemoveContainer" containerID="3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:05:12.460871 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023\": container with ID starting with 3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023 not found: ID does not exist" containerID="3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.461277 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023} err="failed to get container status \"3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023\": rpc error: code = NotFound desc = could not find container \"3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023\": container with ID starting with 3d00b24ede439b8dfa7eb78e218c327ae1bbe9f96719ea8096087e7a0a2f3023 not found: ID does not exist"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.461341 2579 scope.go:115] "RemoveContainer" containerID="ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:05:12.463321 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5\": container with ID starting with ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5 not found: ID does not exist" containerID="ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.463431 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5} err="failed to get container status \"ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5\": rpc error: code = NotFound desc = could not find container \"ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5\": container with ID starting with ebef89d4391dc8ba547c26d463e7c42c9984ebc5ca069457fbf4d549313cbca5 not found: ID does not exist"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.463476 2579 scope.go:115] "RemoveContainer" containerID="c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:05:12.466238 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243\": container with ID starting with c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243 not found: ID does not exist" containerID="c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.466452 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243} err="failed to get container status \"c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243\": rpc error: code = NotFound desc = could not find container \"c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243\": container with ID starting with c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243 not found: ID does not exist"
Jan 16 21:05:12 api-int.lab.ocpipi.lan systemd[1]: Created slice libcontainer container kubepods-burstable-pod1cb3be1f2df5273e9b77f7050777bcbe.slice.
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.574733 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.581573 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.582460 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.583222 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.679128 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-ssl-certs-host\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.680142 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-logs\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.680729 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-secrets\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.687154 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.688306 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-audit-dir\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.689261 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-config\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.791757 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-config\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.792808 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-config\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.793215 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-ssl-certs-host\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.793317 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-logs\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.793449 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.793551 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-secrets\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.793752 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-ssl-certs-host\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.793765 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-audit-dir\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.794154 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.794179 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-secrets\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.794201 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-audit-dir\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.794300 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-logs\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.885771 2579 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.886280 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.887419 2579 kubelet.go:2529] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:12 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:12.888544225Z" level=info msg="Stopping pod sandbox: ad4d9c6ed5d6ab8a2a9b57904014b7c602165d8b2f56808bf0a162e61ca5e05d" id=5b8b6092-b5b5-4e06-94d9-760d9fea68fc name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:05:12 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:12.889429255Z" level=info msg="Stopped pod sandbox (already stopped): ad4d9c6ed5d6ab8a2a9b57904014b7c602165d8b2f56808bf0a162e61ca5e05d" id=5b8b6092-b5b5-4e06-94d9-760d9fea68fc name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:12.890837 2579 scope.go:115] "RemoveContainer" containerID="c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243"
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:05:12.894762 2579 kuberuntime_container.go:842] failed to remove pod init container "setup": failed to get container status "c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243": rpc error: code = NotFound desc = could not find container "c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243": container with ID starting with c8d5e0778043f084685e3eb73a6e1fe79360a9f6d121776ebfa277ff2971c243 not found: ID does not exist; Skipping pod "bootstrap-kube-apiserver-localhost.localdomain_openshift-kube-apiserver(1cb3be1f2df5273e9b77f7050777bcbe)"
Jan 16 21:05:12 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:12.897355987Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/POD" id=0d086202-8ffd-41bb-a781-5fca2daec6be name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 21:05:12 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:12.897852838Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 21:05:12 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:12.968706689Z" level=info msg="Ran pod sandbox c6fca5c97022384178f10593b3c69027ffc4f49d245087f693d7ec56d9af4cc6 with infra container: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/POD" id=0d086202-8ffd-41bb-a781-5fca2daec6be name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 16 21:05:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: W0116 21:05:12.966864 2579 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cb3be1f2df5273e9b77f7050777bcbe.slice/crio-c6fca5c97022384178f10593b3c69027ffc4f49d245087f693d7ec56d9af4cc6 WatchSource:0}: Error finding container c6fca5c97022384178f10593b3c69027ffc4f49d245087f693d7ec56d9af4cc6: Status 404 returned error can't find the container with id c6fca5c97022384178f10593b3c69027ffc4f49d245087f693d7ec56d9af4cc6
Jan 16 21:05:12 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:12.975258696Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=f719c971-9dbc-4618-955b-c0167683d1a3 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:12 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:12.976308380Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=f719c971-9dbc-4618-955b-c0167683d1a3 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:12 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:12.979418810Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=bab357f9-3740-4913-b4f5-4f39a834164b name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:12 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:12.980303521Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=bab357f9-3740-4913-b4f5-4f39a834164b name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:12 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:12.988564000Z" level=info msg="Creating container: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/setup" id=72c7742e-f1b7-4fda-bc09-31b370a652f7 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 21:05:12 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:12.989410587Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 21:05:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:13.061298 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" event=&{ID:1cb3be1f2df5273e9b77f7050777bcbe Type:ContainerStarted Data:c6fca5c97022384178f10593b3c69027ffc4f49d245087f693d7ec56d9af4cc6}
Jan 16 21:05:13 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:05:13 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372.scope.
Jan 16 21:05:13 api-int.lab.ocpipi.lan systemd[1]: run-runc-88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372-runc.covXos.mount: Deactivated successfully.
Jan 16 21:05:13 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372.
Jan 16 21:05:13 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:13.954772279Z" level=info msg="Created container 88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/setup" id=72c7742e-f1b7-4fda-bc09-31b370a652f7 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 21:05:13 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:13.959362648Z" level=info msg="Starting container: 88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372" id=cf0c3359-9279-4595-9e1d-be80add6c589 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 21:05:14 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:14.003861777Z" level=info msg="Started container" PID=16064 containerID=88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372 description=openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/setup id=cf0c3359-9279-4595-9e1d-be80add6c589 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c6fca5c97022384178f10593b3c69027ffc4f49d245087f693d7ec56d9af4cc6
Jan 16 21:05:14 api-int.lab.ocpipi.lan systemd[1]: crio-88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372.scope: Deactivated successfully.
Jan 16 21:05:14 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372.scope: Deactivated successfully.
Jan 16 21:05:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:15.099762 2579 generic.go:334] "Generic (PLEG): container finished" podID=1cb3be1f2df5273e9b77f7050777bcbe containerID="88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372" exitCode=0
Jan 16 21:05:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:15.100167 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" event=&{ID:1cb3be1f2df5273e9b77f7050777bcbe Type:ContainerDied Data:88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372}
Jan 16 21:05:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:15.101318 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:15.108413 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:15.108741 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:15.108801 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:15 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:15.111360752Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=c381d16e-fdb3-40e5-b8ef-0e3c79c8d00d name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:15 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:15.112511238Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=c381d16e-fdb3-40e5-b8ef-0e3c79c8d00d name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:15.115521 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:15.123308 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:15.123495 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:15.123550 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:15 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:15.126732333Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=c01a3e56-c3e1-461e-a192-44e837a3070e name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:15 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:15.128470482Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=c01a3e56-c3e1-461e-a192-44e837a3070e name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:15 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:15.136308074Z" level=info msg="Creating container: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver" id=82771e7f-00c6-4d84-8d8d-f0622f35bfb5 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 21:05:15 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:15.137482358Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 21:05:15 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6.scope.
Jan 16 21:05:15 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6.
Jan 16 21:05:15 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:15.978591599Z" level=info msg="Created container d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver" id=82771e7f-00c6-4d84-8d8d-f0622f35bfb5 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 21:05:15 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:15.982756749Z" level=info msg="Starting container: d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6" id=05166e6c-76cd-4a12-b046-1c6b060f4007 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 21:05:16 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:16.027167655Z" level=info msg="Started container" PID=16110 containerID=d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6 description=openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver id=05166e6c-76cd-4a12-b046-1c6b060f4007 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c6fca5c97022384178f10593b3c69027ffc4f49d245087f693d7ec56d9af4cc6
Jan 16 21:05:16 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:16.079385014Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c074b99f606a6eba6b937f3d96115ec5790b747f6c0b6f6eed01e4f1a3a189eb" id=b7da1a81-94e5-48e1-9cff-03280575152d name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:16 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:16.080769628Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba904bf53d6c9cd58209eebeead820a9fc257a3eef7e2301313cd33072c494dd,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c074b99f606a6eba6b937f3d96115ec5790b747f6c0b6f6eed01e4f1a3a189eb],Size_:546075839,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=b7da1a81-94e5-48e1-9cff-03280575152d name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:16 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:16.083484412Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c074b99f606a6eba6b937f3d96115ec5790b747f6c0b6f6eed01e4f1a3a189eb" id=114a7771-856a-45ee-a17a-2f5e0fc1b258 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:16 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:16.084887622Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba904bf53d6c9cd58209eebeead820a9fc257a3eef7e2301313cd33072c494dd,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c074b99f606a6eba6b937f3d96115ec5790b747f6c0b6f6eed01e4f1a3a189eb],Size_:546075839,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=114a7771-856a-45ee-a17a-2f5e0fc1b258 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:16 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:16.088393809Z" level=info msg="Creating container: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver-insecure-readyz" id=a9c13f87-d80b-4eef-993e-14f90b11ba5e name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 21:05:16 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:16.089339456Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 21:05:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:16.119870 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" event=&{ID:1cb3be1f2df5273e9b77f7050777bcbe Type:ContainerStarted Data:d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6}
Jan 16 21:05:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:16.469276 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:16.480816 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:16.481169 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:16.481228 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:16 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0.scope.
Jan 16 21:05:16 api-int.lab.ocpipi.lan systemd[1]: run-runc-832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0-runc.ynwUAH.mount: Deactivated successfully.
Jan 16 21:05:16 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0.
Jan 16 21:05:17 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:17.018806429Z" level=info msg="Created container 832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver-insecure-readyz" id=a9c13f87-d80b-4eef-993e-14f90b11ba5e name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 21:05:17 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:17.022236719Z" level=info msg="Starting container: 832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0" id=5aff560a-2c18-4367-bd24-bea3ae93969a name=/runtime.v1.RuntimeService/StartContainer
Jan 16 21:05:17 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:17.059288174Z" level=info msg="Started container" PID=16159 containerID=832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0 description=openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver-insecure-readyz id=5aff560a-2c18-4367-bd24-bea3ae93969a name=/runtime.v1.RuntimeService/StartContainer sandboxID=c6fca5c97022384178f10593b3c69027ffc4f49d245087f693d7ec56d9af4cc6
Jan 16 21:05:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:17.145720 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" event=&{ID:1cb3be1f2df5273e9b77f7050777bcbe Type:ContainerStarted Data:832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0}
Jan 16 21:05:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:17.146523 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:17.158702 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:17.158843 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:17.158892 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:18.153033 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:18.154168 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:18.169090 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:18.169295 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:18.169357 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:18.219591 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:18.224443 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:18.226252 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:18.226721 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:19.163584 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:19.196751 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:19.197335 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:19.197542 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:23 api-int.lab.ocpipi.lan conmon[15640]: conmon fa576909424de31254e1 : container 15664 exited with status 1
Jan 16 21:05:23 api-int.lab.ocpipi.lan systemd[1]: crio-fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa.scope: Deactivated successfully.
Jan 16 21:05:23 api-int.lab.ocpipi.lan systemd[1]: crio-fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa.scope: Consumed 8.323s CPU time.
Jan 16 21:05:23 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa.scope: Deactivated successfully.
Jan 16 21:05:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:23.206177 2579 generic.go:334] "Generic (PLEG): container finished" podID=c3db590e56a311b869092b2d6b1724e5 containerID="fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa" exitCode=1
Jan 16 21:05:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:23.207512 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" event=&{ID:c3db590e56a311b869092b2d6b1724e5 Type:ContainerDied Data:fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa}
Jan 16 21:05:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:23.208380 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:23.211359 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:23.211456 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:23.211484 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:23.211720 2579 scope.go:115] "RemoveContainer" containerID="fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa"
Jan 16 21:05:23 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:23.214778458Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=e25fcb86-4ef5-4829-bb15-2b78240991f2 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:23 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:23.216254662Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=e25fcb86-4ef5-4829-bb15-2b78240991f2 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:23 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:23.217563341Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=968ab0f6-546a-4e3a-b592-6480c5561789 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:23 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:23.218183037Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=968ab0f6-546a-4e3a-b592-6480c5561789 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:23 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:23.220220928Z" level=info msg="Creating container: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager" id=4071dd38-7aa2-4a95-b28e-77bbc0f6ba64 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 21:05:23 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:23.220869870Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 21:05:23 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6.scope.
Jan 16 21:05:23 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6.
Jan 16 21:05:23 api-int.lab.ocpipi.lan conmon[15671]: conmon 5caf0d427b79aad6bc0b : container 15684 exited with status 1
Jan 16 21:05:23 api-int.lab.ocpipi.lan systemd[1]: crio-5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4.scope: Deactivated successfully.
Jan 16 21:05:23 api-int.lab.ocpipi.lan systemd[1]: crio-5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4.scope: Consumed 2.887s CPU time.
Jan 16 21:05:23 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4.scope: Deactivated successfully.
Jan 16 21:05:23 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:23.867866992Z" level=info msg="Created container 5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager" id=4071dd38-7aa2-4a95-b28e-77bbc0f6ba64 name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 21:05:23 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:23.870255992Z" level=info msg="Starting container: 5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6" id=08b852ce-a8a5-4d9b-aa7a-ec0a8e3308b7 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 21:05:23 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:23.903875733Z" level=info msg="Started container" PID=16244 containerID=5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6 description=kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager id=08b852ce-a8a5-4d9b-aa7a-ec0a8e3308b7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=df7414d7cd6ee869f535f4228fa7b7b23f6ac8632001d1dce75dcc25250e3f1b
Jan 16 21:05:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:24.216875 2579 generic.go:334] "Generic (PLEG): container finished" podID=b8b0f2012ce2b145220be181d7a5aa55 containerID="5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4" exitCode=1
Jan 16 21:05:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:24.218102 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" event=&{ID:b8b0f2012ce2b145220be181d7a5aa55 Type:ContainerDied Data:5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4}
Jan 16 21:05:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:24.218444 2579 scope.go:115] "RemoveContainer" containerID="0a595a7350da388b8c61b7e704112d1c886edf09068e421c56d19d38e17f400f"
Jan 16 21:05:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:24.219236 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:24 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:24.221732388Z" level=info msg="Removing container: 0a595a7350da388b8c61b7e704112d1c886edf09068e421c56d19d38e17f400f" id=ea03f0cb-af5e-4814-b877-0f70297670c0 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:05:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:24.227253 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:24.227414 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:24.227447 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:24.227663 2579 scope.go:115] "RemoveContainer" containerID="5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4"
Jan 16 21:05:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:05:24.228209 2579 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=bootstrap-kube-scheduler-localhost.localdomain_kube-system(b8b0f2012ce2b145220be181d7a5aa55)\"" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" podUID=b8b0f2012ce2b145220be181d7a5aa55
Jan 16 21:05:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:24.230529 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" event=&{ID:c3db590e56a311b869092b2d6b1724e5 Type:ContainerStarted Data:5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6}
Jan 16 21:05:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:24.231247 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:24.257786 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:24.259031 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:24.264840 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:24 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:24.286030441Z" level=info msg="Removed container 0a595a7350da388b8c61b7e704112d1c886edf09068e421c56d19d38e17f400f: kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler" id=ea03f0cb-af5e-4814-b877-0f70297670c0 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:05:28 api-int.lab.ocpipi.lan approve-csr.sh[16298]: No resources found
Jan 16 21:05:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:28.268370 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:28.272259 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:28.272554 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:28.272745 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:28.701122 2579 kubelet.go:2529] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:05:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:28.701285 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:05:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:28.701810 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:28.704662 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:28.704810 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:28.704840 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:30.405271 2579 kubelet.go:2529] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:05:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:30.405873 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:30.409030 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:30.409121 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:30.409149 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:32.934255 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:05:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:32.936460 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:32.942790 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:32.943115 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:32.943175 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:33 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:05:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:37.466363 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:37.467102 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:37.474426 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:37.475829 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:37.476379 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:37.477231 2579 scope.go:115] "RemoveContainer" containerID="5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4"
Jan 16 21:05:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:37.481722 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:37.481900 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:37.482130 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:37 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:37.484177237Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=9a807fe9-b683-405c-b2ed-70de79aabb6a name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:37 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:37.485223997Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=9a807fe9-b683-405c-b2ed-70de79aabb6a name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:37 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:37.491179609Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=a172d934-435b-49b4-9aee-027c4018f0a1 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:37 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:37.492300064Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=a172d934-435b-49b4-9aee-027c4018f0a1 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:05:37 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:37.496416811Z" level=info msg="Creating container: kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler" id=289e14ec-d881-405e-aeb7-1c5c76aaa0da name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 21:05:37 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:37.497143928Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 16 21:05:38 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998.scope.
Jan 16 21:05:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:38.308409 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:38.313710 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:38.313853 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:38.313892 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:38 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998.
Jan 16 21:05:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:38.499461352Z" level=info msg="Created container b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998: kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler" id=289e14ec-d881-405e-aeb7-1c5c76aaa0da name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 21:05:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:38.505782357Z" level=info msg="Starting container: b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998" id=aa98b5d9-90d3-4a52-a024-8fc3035b2ad2 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 21:05:38 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:38.564088493Z" level=info msg="Started container" PID=16360 containerID=b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998 description=kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler id=aa98b5d9-90d3-4a52-a024-8fc3035b2ad2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=698793765b36d11ab57ecfa5b37f206a0c00f023a38088216f8e7b16931b26a4
Jan 16 21:05:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:38.720107 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:05:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:38.720828 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:38.723824 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:38.724032 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:38.724069 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:39.353071 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" event=&{ID:b8b0f2012ce2b145220be181d7a5aa55 Type:ContainerStarted Data:b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998}
Jan 16 21:05:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:39.354042 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:39.356714 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:39.357512 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:39.358271 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:41.467402 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:41.475839 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:41.476199 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:41.476258 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:43.793506 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 21:05:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:43.794174 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 21:05:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:43.794281 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 21:05:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:43.794341 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 21:05:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:43.794741 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 21:05:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:43.794805 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 21:05:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:43.794870 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 21:05:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:43.795111 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 21:05:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:43.795169 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 21:05:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:44.285530218Z" level=info msg="Stopping pod sandbox: 8ef4b7210274a6b52b1f275b2b88575b44667f9376ae93b8eea1a279639e87b6" id=606aa90a-1da9-4de9-a6ae-b6ee833375a4 name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:05:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:44.287716457Z" level=info msg="Stopped pod sandbox (already stopped): 8ef4b7210274a6b52b1f275b2b88575b44667f9376ae93b8eea1a279639e87b6" id=606aa90a-1da9-4de9-a6ae-b6ee833375a4 name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:05:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:44.290210234Z" level=info msg="Removing pod sandbox: 8ef4b7210274a6b52b1f275b2b88575b44667f9376ae93b8eea1a279639e87b6" id=b866c3a7-78d4-4f6b-98ed-4c8396541197 name=/runtime.v1.RuntimeService/RemovePodSandbox
Jan 16 21:05:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:44.308901423Z" level=info msg="Removed pod sandbox: 8ef4b7210274a6b52b1f275b2b88575b44667f9376ae93b8eea1a279639e87b6" id=b866c3a7-78d4-4f6b-98ed-4c8396541197 name=/runtime.v1.RuntimeService/RemovePodSandbox
Jan 16 21:05:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:44.313912602Z" level=info msg="Stopping pod sandbox: ad4d9c6ed5d6ab8a2a9b57904014b7c602165d8b2f56808bf0a162e61ca5e05d" id=e2821d43-b969-4831-997b-10b933d53d2d name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:05:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:44.314513595Z" level=info msg="Stopped pod sandbox (already stopped): ad4d9c6ed5d6ab8a2a9b57904014b7c602165d8b2f56808bf0a162e61ca5e05d" id=e2821d43-b969-4831-997b-10b933d53d2d name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:05:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:44.316527718Z" level=info msg="Removing pod sandbox: ad4d9c6ed5d6ab8a2a9b57904014b7c602165d8b2f56808bf0a162e61ca5e05d" id=c9eda589-0410-44b0-8f3b-aa64fbdf45fb name=/runtime.v1.RuntimeService/RemovePodSandbox
Jan 16 21:05:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:05:44.331828149Z" level=info msg="Removed pod sandbox: ad4d9c6ed5d6ab8a2a9b57904014b7c602165d8b2f56808bf0a162e61ca5e05d" id=c9eda589-0410-44b0-8f3b-aa64fbdf45fb name=/runtime.v1.RuntimeService/RemovePodSandbox
Jan 16 21:05:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:48.374212 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:48.379531 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:48.380130 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:48.380317 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:48 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.BHBgkT.mount: Deactivated successfully.
Jan 16 21:05:48 api-int.lab.ocpipi.lan approve-csr.sh[16427]: No resources found
Jan 16 21:05:53 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:05:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:58.509698 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:05:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:58.521106 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:05:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:58.521739 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:05:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:05:58.521817 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:05:58 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.F4xWIH.mount: Deactivated successfully.
Jan 16 21:06:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:03.466796 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:06:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:03.476357 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:06:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:03.476730 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:06:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:03.476793 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:06:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:08.663725 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:06:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:08.670914 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:06:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:08.671314 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:06:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:08.671875 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:06:09 api-int.lab.ocpipi.lan approve-csr.sh[16524]: No resources found Jan 16 21:06:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:11.467534 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:06:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:11.472903 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:06:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:11.473545 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:06:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:11.473612 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:06:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:12.467444 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:06:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:12.473458 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:06:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:12.474582 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:06:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:12.474772 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:06:13 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:06:17 api-int.lab.ocpipi.lan sudo[16551]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman ps Jan 16 21:06:17 api-int.lab.ocpipi.lan 
sudo[16551]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 21:06:17 api-int.lab.ocpipi.lan sudo[16551]: pam_unix(sudo:session): session closed for user root Jan 16 21:06:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.jf3UfK.mount: Deactivated successfully. Jan 16 21:06:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:18.778817 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:06:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:18.792817 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:06:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:18.793322 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:06:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:18.793385 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:06:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:28.880382 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:06:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:28.886480 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:06:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:28.886765 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:06:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:28.886832 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:06:30 api-int.lab.ocpipi.lan approve-csr.sh[16615]: No resources found Jan 16 21:06:30 api-int.lab.ocpipi.lan sudo[16626]: core : TTY=pts/1 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/podman logs 26785709b925 Jan 16 21:06:30 api-int.lab.ocpipi.lan sudo[16626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=1000) Jan 16 21:06:31 api-int.lab.ocpipi.lan sudo[16626]: pam_unix(sudo:session): session closed for user root Jan 16 21:06:33 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:06:38 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.jnpjhi.mount: Deactivated successfully. 
Jan 16 21:06:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:38.959242 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:06:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:38.967557 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:06:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:38.968450 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:06:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:38.969256 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:06:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:43.796161 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 21:06:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:43.798306 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 21:06:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:43.798554 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 21:06:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:43.798628 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running Jan 16 21:06:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:43.798840 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 21:06:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:43.798890 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 21:06:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:43.799120 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 21:06:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:43.799188 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 21:06:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:43.799235 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 21:06:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:44.468396 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:06:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:44.476315 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:06:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:44.476430 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:06:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:44.476487 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 
21:06:48 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.XGsp2f.mount: Deactivated successfully. Jan 16 21:06:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:49.052900 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:06:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:49.059405 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:06:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:49.059590 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:06:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:49.059743 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:06:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:49.467158 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:06:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:49.467832 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:06:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:49.473091 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:06:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:49.474858 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:06:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:49.475256 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:06:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:49.479526 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:06:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:49.479781 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:06:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:49.479847 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:06:51 api-int.lab.ocpipi.lan approve-csr.sh[16710]: No resources found Jan 16 21:06:54 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:06:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:56.467861 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:06:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:56.478224 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:06:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:56.479470 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:06:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:56.480121 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 
21:06:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:59.126330 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:06:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:59.134249 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:06:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:59.134349 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:06:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:06:59.134399 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:07:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:03.467474 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:07:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:03.468243 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:07:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:03.476351 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:07:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:03.476786 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:07:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:03.477307 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:07:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:03.477371 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:07:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:03.477088 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:07:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:03.478544 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:07:08 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.Ws91W3.mount: Deactivated successfully. 
Jan 16 21:07:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:09.220858 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:07:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:09.228459 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:07:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:09.228763 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:07:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:09.228835 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:07:12 api-int.lab.ocpipi.lan approve-csr.sh[16791]: No resources found Jan 16 21:07:14 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:07:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:16.468863 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:07:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:16.479465 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:07:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:16.479777 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:07:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:16.479840 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:07:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:17.467490 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:07:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:17.473487 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:07:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:17.473576 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:07:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:17.473761 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:07:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.3uiKpa.mount: Deactivated successfully. 
Jan 16 21:07:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:19.297516 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:07:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:19.303330 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:07:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:19.303757 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:07:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:19.303828 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:07:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:29.392311 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:07:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:29.398440 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:07:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:29.399554 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:07:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:29.400345 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:07:33 api-int.lab.ocpipi.lan approve-csr.sh[16876]: No resources found Jan 16 21:07:34 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:07:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:37.469278 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:07:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:37.479834 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:07:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:37.480269 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:07:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:37.480374 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:07:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:39.481512 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:07:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:39.489572 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:07:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:39.489892 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:07:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:39.490176 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:07:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:43.801118 2579 kubelet_getters.go:187] "Pod status updated" 
pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 21:07:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:43.801479 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 21:07:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:43.801585 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 21:07:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:43.801748 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 21:07:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:43.801872 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 21:07:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:43.802285 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 21:07:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:43.802367 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 21:07:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:43.802416 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running Jan 16 21:07:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:43.802481 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 21:07:43 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:07:43.905094939Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2" id=4fe8436b-e297-44cf-8ea2-f5a4b664841a name=/runtime.v1.ImageService/ImageStatus Jan 16 21:07:43 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:07:43.907817014Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a5beb712367dd5020b5a7b99c2ffbfcd91d3c6c425625d5cc816f58cf145564f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2],Size_:448590957,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=4fe8436b-e297-44cf-8ea2-f5a4b664841a name=/runtime.v1.ImageService/ImageStatus Jan 16 21:07:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:49.574866 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:07:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:49.580559 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:07:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:49.581205 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:07:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:49.581270 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:07:53 
api-int.lab.ocpipi.lan approve-csr.sh[16956]: No resources found Jan 16 21:07:54 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:07:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:59.652633 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:07:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:59.662625 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:07:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:59.663163 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:07:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:07:59.663225 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:08:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:01.467132 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:08:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:01.477805 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:08:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:01.478275 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:08:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:01.478347 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:08:08 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.e5S4S0.mount: Deactivated successfully. 
Jan 16 21:08:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:09.749462 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:08:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:09.765833 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:08:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:09.766260 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:08:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:09.766326 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:08:14 api-int.lab.ocpipi.lan approve-csr.sh[17035]: No resources found Jan 16 21:08:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:14.467454 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:08:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:14.477147 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:08:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:14.477351 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:08:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:14.477406 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:08:14 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:08:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:16.467173 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:08:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:16.474119 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:08:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:16.475102 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:08:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:16.475528 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:08:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:19.846239 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:08:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:19.852611 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:08:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:19.853167 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:08:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:19.853228 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:08:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:23.466463 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller 
attach/detach" Jan 16 21:08:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:23.473069 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:08:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:23.473344 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:08:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:23.473408 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:08:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:25.466242 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:08:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:25.472587 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:08:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:25.472820 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:08:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:25.472875 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:08:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:26.466473 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:08:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:26.467373 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:08:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:26.474248 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:08:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:26.474524 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:08:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:26.475097 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:08:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:26.475167 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:08:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:26.474810 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:08:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:26.476450 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:08:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:29.928453 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:08:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:29.934331 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:08:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:29.934609 2579 kubelet_node_status.go:696] "Recording event 
message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:08:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:29.934777 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:08:34 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:08:35 api-int.lab.ocpipi.lan approve-csr.sh[17112]: No resources found Jan 16 21:08:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:40.020109 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:08:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:40.027616 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:08:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:40.028545 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:08:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:40.029365 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:08:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:43.804207 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 21:08:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:43.804814 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 21:08:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:43.805110 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 21:08:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:43.805177 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 21:08:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:43.805271 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 21:08:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:43.805359 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 21:08:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:43.805468 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 21:08:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:43.805541 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running Jan 16 21:08:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:43.805790 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 21:08:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:45.467323 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:08:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:45.477528 2579 kubelet_node_status.go:696] "Recording event message for node" 
node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:08:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:45.477913 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:08:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:45.478174 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:08:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:46.466519 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:08:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:46.473849 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:08:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:46.475213 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:08:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:46.475282 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:08:48 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.JyvPIF.mount: Deactivated successfully. Jan 16 21:08:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:50.109291 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:08:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:50.121495 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:08:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:50.121869 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:08:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:08:50.122119 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:08:55 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:08:56 api-int.lab.ocpipi.lan approve-csr.sh[17196]: No resources found Jan 16 21:09:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:00.225182 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:09:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:00.231809 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:09:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:00.232291 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:09:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:00.232349 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:09:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:10.317379 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:09:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:10.323411 2579 kubelet_node_status.go:696] "Recording event message for node" 
node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:09:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:10.323814 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:09:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:10.323903 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:09:15 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:09:16 api-int.lab.ocpipi.lan approve-csr.sh[17275]: No resources found Jan 16 21:09:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.3y5jIp.mount: Deactivated successfully. Jan 16 21:09:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:20.402662 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:09:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:20.408507 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:09:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:20.409183 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:09:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:20.409247 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:09:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:25.467557 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:09:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:25.472648 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:09:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:25.473116 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:09:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:25.473177 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:09:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:27.468510 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:09:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:27.475816 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:09:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:27.476464 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:09:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:27.476532 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:09:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:30.466850 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:09:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:30.476386 2579 kubelet_node_status.go:696] "Recording event message for node" 
node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:09:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:30.476581 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:09:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:30.476644 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:09:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:30.479905 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:09:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:30.486325 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:09:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:30.486516 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:09:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:30.486574 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:09:35 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:09:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:37.472528 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:09:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:37.490446 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:09:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:37.490573 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:09:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:37.490630 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:09:37 api-int.lab.ocpipi.lan approve-csr.sh[17355]: No resources found Jan 16 21:09:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:40.468277 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:09:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:40.474483 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:09:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:40.475322 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:09:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:40.475389 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:09:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:40.558105 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:09:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:40.571171 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:09:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:40.571264 2579 kubelet_node_status.go:696] 
"Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:09:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:40.571314 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:09:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:43.808153 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 21:09:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:43.808633 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 21:09:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:43.808848 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 21:09:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:43.809075 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 21:09:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:43.809148 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running Jan 16 21:09:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:43.809259 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 21:09:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:43.809317 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 21:09:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:43.809379 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 21:09:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:43.809480 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 21:09:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:45.467813 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:09:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:45.477170 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:09:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:45.477354 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:09:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:45.477408 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:09:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:50.673190 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:09:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:50.681214 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:09:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:50.681307 2579 kubelet_node_status.go:696] "Recording 
event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:09:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:50.681356 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:09:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:52.467654 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:09:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:52.469889 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:09:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:52.474368 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:09:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:52.474563 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:09:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:52.474618 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:09:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:52.475881 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:09:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:52.476874 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:09:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:52.477103 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:09:55 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:09:58 api-int.lab.ocpipi.lan approve-csr.sh[17437]: No resources found Jan 16 21:09:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:59.466838 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:09:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:59.475577 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:09:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:59.475791 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:09:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:09:59.476323 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:10:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:00.762527 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:10:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:00.773386 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:10:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:00.773784 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:10:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:00.773850 2579 
kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:10:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:10.844600 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:10:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:10.852449 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:10:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:10.853246 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:10:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:10.853315 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:10:15 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:10:19 api-int.lab.ocpipi.lan approve-csr.sh[17519]: No resources found Jan 16 21:10:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:20.937482 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:10:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:20.944238 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:10:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:20.944485 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:10:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:20.944597 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:10:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:31.033822 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:10:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:31.039827 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:10:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:31.040905 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:10:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:31.041351 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:10:35 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:10:38 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.m8GGrB.mount: Deactivated successfully. 
Jan 16 21:10:39 api-int.lab.ocpipi.lan approve-csr.sh[17614]: No resources found
Jan 16 21:10:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:40.469283 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:10:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:40.479564 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:10:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:40.480118 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:10:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:40.480305 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:10:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:41.130101 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:10:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:41.136849 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:10:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:41.137114 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:10:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:41.137178 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:10:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:42.467396 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:10:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:42.475258 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:10:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:42.475678 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:10:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:42.475856 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:10:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:43.810182 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 21:10:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:43.810499 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 21:10:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:43.810614 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 21:10:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:43.810807 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 21:10:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:43.810898 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 21:10:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:43.811178 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 21:10:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:43.811259 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 21:10:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:43.811308 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 21:10:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:43.811361 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 21:10:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:45.467884 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:10:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:45.480444 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:10:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:45.480832 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:10:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:45.480897 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:10:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:48.469827 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:10:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:48.480820 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:10:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:48.481552 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:10:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:48.482210 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:10:48 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.0MdAe8.mount: Deactivated successfully.
Jan 16 21:10:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:51.211590 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:10:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:51.226381 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:10:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:51.226596 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:10:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:10:51.226654 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:10:55 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:10:58 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.pWFoF4.mount: Deactivated successfully.
Jan 16 21:11:00 api-int.lab.ocpipi.lan approve-csr.sh[17694]: No resources found
Jan 16 21:11:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:01.321541 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:11:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:01.329838 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:11:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:01.330195 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:11:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:01.330253 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:11:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:03.467351 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:11:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:03.472647 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:11:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:03.472870 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:11:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:03.473125 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:11:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:05.466351 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:11:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:05.473317 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:11:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:05.473836 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:11:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:05.474077 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:11:08 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.N4tWth.mount: Deactivated successfully.
Jan 16 21:11:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:11.419342 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:11:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:11.425688 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:11:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:11.426351 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:11:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:11.426416 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:11:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:13.465903 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:11:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:13.470832 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:11:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:13.471237 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:11:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:13.471300 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:11:16 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:11:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:19.466585 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:11:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:19.475388 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:11:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:19.477293 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:11:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:19.478223 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:11:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:21.472501 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:11:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:21.483241 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:11:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:21.483351 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:11:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:21.483404 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:11:21 api-int.lab.ocpipi.lan approve-csr.sh[17772]: No resources found
Jan 16 21:11:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:21.512198 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:11:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:21.516810 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:11:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:21.517081 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:11:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:21.517152 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:11:28 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.QT77eA.mount: Deactivated successfully.
Jan 16 21:11:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:31.599434 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:11:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:31.606396 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:11:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:31.606574 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:11:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:31.606628 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:11:36 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:11:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:41.737343 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:11:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:41.745609 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:11:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:41.745854 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:11:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:41.746251 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:11:42 api-int.lab.ocpipi.lan approve-csr.sh[17855]: No resources found
Jan 16 21:11:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:43.812564 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 21:11:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:43.813648 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 21:11:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:43.813856 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 21:11:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:43.814095 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 21:11:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:43.814185 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 21:11:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:43.814234 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 21:11:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:43.814289 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 21:11:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:43.814340 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 21:11:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:43.814383 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 21:11:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:46.467391 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:11:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:46.473315 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:11:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:46.473425 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:11:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:46.473515 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:11:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:47.467264 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:11:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:47.474695 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:11:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:47.475362 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:11:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:47.475469 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:11:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:51.841248 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:11:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:51.847441 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:11:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:51.847654 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:11:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:51.847813 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:11:56 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:11:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:57.467838 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:11:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:57.473695 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:11:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:57.474423 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:11:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:11:57.474486 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:12:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:01.913893 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:12:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:01.920208 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:12:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:01.920477 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:12:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:01.920538 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:12:03 api-int.lab.ocpipi.lan approve-csr.sh[17935]: No resources found
Jan 16 21:12:08 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.PRkyDU.mount: Deactivated successfully.
Jan 16 21:12:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:11.997328 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:12:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:12.003637 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:12:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:12.004130 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:12:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:12.004193 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:12:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:16.468235 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:12:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:16.499482 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:12:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:16.499694 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:12:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:16.499862 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:12:16 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:12:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.W86lao.mount: Deactivated successfully.
Jan 16 21:12:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:21.466671 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:12:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:21.472904 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:12:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:21.473314 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:12:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:21.473374 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:12:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:22.104637 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:12:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:22.111835 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:12:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:22.112195 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:12:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:22.112251 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:12:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:22.467234 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:12:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:22.473559 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:12:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:22.473678 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:12:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:22.473855 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:12:23 api-int.lab.ocpipi.lan approve-csr.sh[18015]: No resources found
Jan 16 21:12:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:24.467267 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:12:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:24.476301 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:12:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:24.476521 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:12:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:24.476581 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:12:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:32.179557 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:12:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:32.188457 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:12:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:32.189265 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:12:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:32.189617 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:12:36 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:12:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:37.468520 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:12:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:37.476544 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:12:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:37.476641 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:12:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:37.476690 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:12:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:41.467559 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:12:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:41.474405 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:12:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:41.475206 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:12:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:41.475273 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:12:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:42.318486 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:12:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:42.326872 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:12:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:42.327314 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:12:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:42.327377 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:12:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:43.816188 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 21:12:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:43.816544 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 21:12:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:43.816646 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 21:12:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:43.816705 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 21:12:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:43.816901 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 21:12:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:43.817197 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 21:12:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:43.817272 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 21:12:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:43.817335 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 21:12:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:43.817392 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 21:12:43 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:12:43.928501596Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2" id=e2bdbccf-9c50-41d9-bb8d-f018654b7088 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:12:43 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:12:43.930459082Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a5beb712367dd5020b5a7b99c2ffbfcd91d3c6c425625d5cc816f58cf145564f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2],Size_:448590957,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=e2bdbccf-9c50-41d9-bb8d-f018654b7088 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:12:44 api-int.lab.ocpipi.lan approve-csr.sh[18094]: No resources found
Jan 16 21:12:48 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.t7cHe3.mount: Deactivated successfully.
Jan 16 21:12:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:52.399159 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:12:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:52.406890 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:12:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:52.407276 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:12:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:12:52.407337 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:12:56 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:13:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:02.473662 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:13:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:02.483677 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:13:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:02.483916 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:13:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:02.485359 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:13:05 api-int.lab.ocpipi.lan approve-csr.sh[18180]: No resources found
Jan 16 21:13:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:06.467202 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:13:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:06.473152 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:13:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:06.474196 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:13:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:06.474587 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:13:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:09.468229 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:13:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:09.481173 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:13:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:09.481666 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:13:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:09.481841 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:13:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:12.562209 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:13:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:12.570649 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:13:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:12.572622 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:13:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:12.573626 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:13:17 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:13:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:22.468219 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:13:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:22.475583 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:13:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:22.476448 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:13:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:22.477099 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:13:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:22.673264 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:13:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:22.679109 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:13:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:22.679414 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:13:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:22.679471 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:13:26 api-int.lab.ocpipi.lan approve-csr.sh[18262]: No resources found
Jan 16 21:13:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:27.467164 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:13:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:27.474285 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:13:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:27.474717 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:13:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:27.474881 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:13:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:32.752301 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:13:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:32.764440 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:13:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:32.766873 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:13:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:32.768160 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:13:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:33.472590 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:13:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:33.483224 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:13:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:33.483504 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:13:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:33.483586 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:13:37 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:13:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:42.846214 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:13:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:42.852173 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:13:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:42.852556 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:13:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:42.852618 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:13:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:43.819543 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 21:13:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:43.820209 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 21:13:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:43.820294 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 21:13:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:43.820341 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 21:13:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:43.820446 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 21:13:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:43.820519 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 21:13:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:43.820580 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 21:13:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:43.820722 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 21:13:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:43.821103 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 21:13:46 api-int.lab.ocpipi.lan approve-csr.sh[18344]: No resources found
Jan 16 21:13:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:47.467619 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:13:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:47.468264 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:13:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:47.474908 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:13:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:47.475271 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:13:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:47.475330 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:13:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:47.475453 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:13:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:47.475531 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:13:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:47.475579 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:13:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:50.466470 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:13:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:50.474524 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:13:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:50.475084 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:13:50 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:50.475152 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:13:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:52.947217 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:13:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:52.953464 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:13:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:52.954229 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:13:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:13:52.954879 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:13:57 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:13:58 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.mpBQwW.mount: Deactivated successfully.
Jan 16 21:14:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:03.030278 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:14:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:03.037118 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:14:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:03.038434 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:14:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:03.039195 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:14:07 api-int.lab.ocpipi.lan approve-csr.sh[18422]: No resources found
Jan 16 21:14:08 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.yOM64M.mount: Deactivated successfully.
Jan 16 21:14:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:09.467428 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:14:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:09.473544 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:14:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:09.473731 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:14:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:09.473895 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:14:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:13.124383 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:14:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:13.130364 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:14:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:13.131587 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:14:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:13.132356 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:14:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:14.466531 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:14:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:14.471519 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:14:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:14.471683 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:14:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:14.471737 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:14:17 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:14:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:23.211687 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:14:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:23.220641 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:14:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:23.221270 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:14:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:23.221336 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:14:28 api-int.lab.ocpipi.lan approve-csr.sh[18501]: No resources found
Jan 16 21:14:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:30.469276 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:14:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:30.477068 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:14:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:30.477306 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:14:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:30.477376 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:14:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:33.293317 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:14:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:33.301881 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:14:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:33.302491 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:14:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:33.302553 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:14:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:35.467476 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:14:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:35.473570 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:14:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:35.473694 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:14:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:35.474146 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:14:37 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:14:38 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.bj5tgv.mount: Deactivated successfully.
Jan 16 21:14:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:39.466520 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:14:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:39.472729 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:14:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:39.473251 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:14:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:39.473313 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:14:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:43.384393 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:14:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:43.390235 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:14:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:43.390443 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:14:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:43.390505 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:14:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:43.823111 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 21:14:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:43.823450 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 21:14:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:43.823903 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 21:14:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:43.824136 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 21:14:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:43.824353 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 21:14:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:43.825133 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 21:14:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:43.825209 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 21:14:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:43.825266 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 21:14:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:43.825309 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 21:14:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:48.468493 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:14:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:48.483620 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:14:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:48.483730 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:14:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:48.484065 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:14:49 api-int.lab.ocpipi.lan approve-csr.sh[18583]: No resources found
Jan 16 21:14:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:51.467164 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:14:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:51.472865 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:14:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:51.474298 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:14:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:51.474363 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:14:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:53.456420 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:14:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:53.466584 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:14:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:53.473656 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:14:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:53.474487 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:14:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:53.475329 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:14:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:53.475599 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:14:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:53.475665 2579
kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:14:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:14:53.477151 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:14:57 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:15:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:01.466501 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:15:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:01.473868 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:15:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:01.474302 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:15:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:01.474361 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:15:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:03.576217 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:15:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:03.585094 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:15:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:03.586066 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:15:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:03.586138 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:15:09 api-int.lab.ocpipi.lan approve-csr.sh[18678]: No resources found Jan 16 21:15:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:13.675218 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:15:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:13.682893 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:15:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:13.683470 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:15:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:13.683531 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:15:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:15.466375 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:15:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:15.471503 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:15:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:15.471706 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:15:15 api-int.lab.ocpipi.lan 
kubelet.sh[2579]: I0116 21:15:15.471871 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:15:17 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:15:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.LnY7J7.mount: Deactivated successfully. Jan 16 21:15:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:23.780155 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:15:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:23.786212 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:15:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:23.786402 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:15:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:23.786456 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:15:30 api-int.lab.ocpipi.lan approve-csr.sh[18758]: No resources found Jan 16 21:15:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:33.875215 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:15:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:33.880890 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:15:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:33.881538 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:15:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:33.881601 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:15:38 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:15:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:39.467187 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:15:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:39.474871 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:15:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:39.475287 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:15:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:39.475347 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:15:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:43.467470 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:15:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:43.473468 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:15:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:43.473706 2579 kubelet_node_status.go:696] "Recording event message for 
node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:15:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:43.473892 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:15:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:43.826269 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 21:15:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:43.826517 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 21:15:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:43.826640 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 21:15:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:43.826713 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 21:15:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:43.826884 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 21:15:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:43.827104 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running Jan 16 21:15:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:43.827188 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 21:15:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:43.827240 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 21:15:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:43.827297 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 21:15:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:43.967334 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:15:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:43.973185 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:15:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:43.973284 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:15:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:43.973339 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:15:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:47.466297 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:15:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:47.479596 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:15:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:47.479913 2579 kubelet_node_status.go:696] "Recording event message for node" 
node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:15:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:47.485288 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:15:48 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.VJIIQU.mount: Deactivated successfully. Jan 16 21:15:51 api-int.lab.ocpipi.lan approve-csr.sh[18837]: No resources found Jan 16 21:15:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:54.100749 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:15:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:54.106268 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:15:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:54.106454 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:15:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:54.106511 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:15:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:57.466370 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:15:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:57.471738 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:15:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:57.472200 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:15:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:15:57.472257 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:15:58 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:16:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:04.210294 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:16:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:04.217329 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:16:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:04.218101 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:16:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:04.218498 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:16:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:09.467725 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:16:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:09.474503 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:16:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:09.474737 2579 kubelet_node_status.go:696] "Recording event message for node" 
node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:16:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:09.474903 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:16:12 api-int.lab.ocpipi.lan approve-csr.sh[18915]: No resources found Jan 16 21:16:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:14.304744 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:16:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:14.310478 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:16:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:14.311145 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:16:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:14.311719 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:16:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:16.469552 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:16:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:16.474137 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:16:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:16.478647 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:16:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:16.479318 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:16:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:16.479372 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:16:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:16.481103 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:16:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:16.481174 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:16:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:16.481219 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:16:18 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:16:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.O0QpMv.mount: Deactivated successfully. 
Jan 16 21:16:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:24.386191 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:16:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:24.394057 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:16:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:24.394391 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:16:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:24.394457 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:16:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:26.468105 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:16:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:26.474472 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:16:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:26.474679 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:16:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:26.474738 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:16:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:31.466482 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:16:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:31.471708 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:16:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:31.472176 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:16:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:31.473219 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:16:33 api-int.lab.ocpipi.lan approve-csr.sh[18994]: No resources found Jan 16 21:16:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:34.460309 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:16:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:34.467774 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:16:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:34.468446 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:16:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:34.468512 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:16:38 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:16:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:43.829340 2579 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 21:16:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:43.830394 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 21:16:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:43.830481 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 21:16:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:43.830553 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 21:16:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:43.830607 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 21:16:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:43.830666 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 21:16:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:43.830720 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running Jan 16 21:16:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:43.830911 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 21:16:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:43.831225 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 21:16:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:44.550898 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:16:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:44.556604 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:16:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:44.556719 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:16:44 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:44.556768 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:16:48 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.WAS0mi.mount: Deactivated successfully. 
Jan 16 21:16:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:49.467389 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:16:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:49.473747 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:16:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:49.474237 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:16:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:49.474296 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:16:54 api-int.lab.ocpipi.lan approve-csr.sh[19072]: No resources found Jan 16 21:16:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:54.637169 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:16:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:54.643767 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:16:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:54.644259 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:16:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:16:54.644369 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:16:58 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:17:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:00.467456 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:17:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:00.478279 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:17:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:00.484174 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:17:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:00.484681 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:17:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:02.468426 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:17:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:02.498224 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:17:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:02.498344 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:17:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:02.498401 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:17:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:04.466494 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller 
attach/detach" Jan 16 21:17:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:04.473721 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:17:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:04.474199 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:17:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:04.474266 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:17:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:04.729302 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:17:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:04.735292 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:17:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:04.735575 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:17:04 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:04.735646 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:17:08 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.tQ8x5F.mount: Deactivated successfully. Jan 16 21:17:14 api-int.lab.ocpipi.lan approve-csr.sh[19148]: No resources found Jan 16 21:17:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:14.815301 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:17:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:14.821421 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:17:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:14.821662 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:17:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:14.821726 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:17:18 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:17:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:24.885480 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:17:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:24.892329 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:17:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:24.892426 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:17:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:24.892474 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:17:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:28.467659 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller 
attach/detach" Jan 16 21:17:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:28.481336 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:17:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:28.481652 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:17:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:28.481755 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:17:28 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.FAC4nw.mount: Deactivated successfully. Jan 16 21:17:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:32.469095 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:17:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:32.469685 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:17:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:32.476709 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:17:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:32.477112 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:17:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:32.477179 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:17:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:32.488134 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:17:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:32.490907 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:17:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:32.491233 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:17:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:34.958416 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:17:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:34.976646 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:17:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:34.977109 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:17:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:34.977176 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:17:35 api-int.lab.ocpipi.lan approve-csr.sh[19226]: No resources found Jan 16 21:17:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:36.468581 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:17:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:36.475569 2579 
kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:17:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:36.476380 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:17:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:36.476751 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:17:39 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:17:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:41.466660 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:17:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:41.474394 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:17:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:41.474579 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:17:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:41.474648 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:17:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:43.831787 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 21:17:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:43.832326 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 21:17:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:43.832428 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 21:17:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:43.832489 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running Jan 16 21:17:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:43.832559 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 21:17:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:43.832610 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 21:17:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:43.832667 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 21:17:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:43.832720 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 21:17:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:43.832763 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 21:17:43 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:17:43.951738722Z" level=info msg="Checking image status: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2" id=bb29eb8d-afe6-4d66-a786-d0058c5012f2 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:17:43 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:17:43.954743255Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a5beb712367dd5020b5a7b99c2ffbfcd91d3c6c425625d5cc816f58cf145564f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2],Size_:448590957,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=bb29eb8d-afe6-4d66-a786-d0058c5012f2 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:17:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:45.131764 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:17:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:45.138376 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:17:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:45.138607 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:17:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:45.138664 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:17:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:55.210767 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:17:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:55.218529 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:17:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:55.219229 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:17:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:17:55.221753 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:17:56 api-int.lab.ocpipi.lan approve-csr.sh[19307]: No resources found Jan 16 21:17:59 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:18:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:01.467746 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:18:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:01.475665 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:18:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:01.476185 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:18:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:01.476279 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:18:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:05.282614 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 
21:18:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:05.291392 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:18:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:05.291607 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:18:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:05.291716 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:18:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:13.467374 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:18:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:13.473231 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:18:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:13.473421 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:18:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:13.473476 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:18:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:15.379803 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:18:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:15.397595 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:18:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:15.397792 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:18:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:15.400177 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:18:17 api-int.lab.ocpipi.lan approve-csr.sh[19389]: No resources found Jan 16 21:18:19 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:18:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:23.467437 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:18:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:23.474134 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:18:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:23.474337 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:18:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:23.474400 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:18:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:25.466462 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:18:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:25.471726 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" 
event="NodeHasSufficientMemory" Jan 16 21:18:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:25.472110 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:18:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:25.472171 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:18:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:25.478661 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:18:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:25.484642 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:18:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:25.485688 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:18:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:25.486426 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:18:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:35.562109 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:18:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:35.567169 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:18:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:35.567593 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:18:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:35.568383 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:18:37 api-int.lab.ocpipi.lan approve-csr.sh[19467]: No resources found Jan 16 21:18:39 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:18:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:43.834284 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 21:18:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:43.834572 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 21:18:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:43.834767 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 21:18:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:43.835104 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running Jan 16 21:18:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:43.835189 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 21:18:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:43.835243 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 21:18:43 api-int.lab.ocpipi.lan 
kubelet.sh[2579]: I0116 21:18:43.835310 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 21:18:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:43.835367 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 21:18:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:43.835412 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 21:18:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:45.632912 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:18:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:45.641277 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:18:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:45.642778 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:18:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:45.643362 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:18:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:51.466784 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:18:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:51.472520 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:18:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:51.473216 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:18:51 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:51.473277 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:18:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:52.467058 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:18:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:52.473237 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:18:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:52.474691 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:18:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:52.475355 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:18:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:54.467401 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:18:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:54.473217 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:18:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:54.473427 2579 kubelet_node_status.go:696] "Recording event message for node" 
node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:18:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:54.473482 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:18:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:55.705493 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:18:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:55.711083 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:18:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:55.711270 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:18:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:55.711324 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:18:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:56.466637 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:18:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:56.472178 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:18:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:56.481812 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:18:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:18:56.482380 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:18:58 api-int.lab.ocpipi.lan approve-csr.sh[19548]: No resources found Jan 16 21:18:59 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:19:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:05.779432 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:19:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:05.786607 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:19:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:05.787117 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:19:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:05.787193 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:19:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:10.465766 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:19:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:10.471440 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:19:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:10.471625 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:19:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:10.471682 2579 kubelet_node_status.go:696] 
"Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:19:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:15.856691 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:19:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:15.862703 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:19:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:15.863108 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:19:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:15.863313 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:19:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.xMH84m.mount: Deactivated successfully. Jan 16 21:19:19 api-int.lab.ocpipi.lan approve-csr.sh[19645]: No resources found Jan 16 21:19:19 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:19:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:23.465779 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:19:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:23.471500 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:19:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:23.471591 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:19:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:23.471639 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:19:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:25.946675 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:19:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:25.952771 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:19:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:25.953243 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:19:25 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:25.953301 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:19:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:33.467144 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:19:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:33.475306 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:19:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:33.475683 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:19:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:33.475785 2579 kubelet_node_status.go:696] 
"Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:19:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:36.018584 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:19:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:36.026448 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:19:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:36.027373 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:19:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:36.027761 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:19:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:36.467799 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:19:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:36.474268 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:19:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:36.474460 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:19:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:36.474516 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:19:38 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.UU4EZL.mount: Deactivated successfully. 
Jan 16 21:19:39 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:19:40 api-int.lab.ocpipi.lan approve-csr.sh[19722]: No resources found
Jan 16 21:19:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:43.837224 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 21:19:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:43.838193 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 21:19:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:43.838316 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 21:19:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:43.838375 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 21:19:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:43.838450 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 21:19:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:43.838511 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 21:19:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:43.838580 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 21:19:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:43.838633 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 21:19:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:43.838675 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 21:19:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:46.101364 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:19:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:46.115650 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:19:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:46.116697 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:19:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:46.117638 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:19:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:47.468532 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:19:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:47.477597 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:19:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:47.477731 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:19:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:47.477782 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:19:48 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.rvyjd2.mount: Deactivated successfully.
Jan 16 21:19:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:56.208216 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:19:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:56.215772 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:19:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:56.216675 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:19:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:19:56.217422 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:20:00 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:20:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:00.468459 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:20:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:00.478312 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:20:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:00.478516 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:20:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:00.478572 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:20:00 api-int.lab.ocpipi.lan approve-csr.sh[19810]: No resources found
Jan 16 21:20:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:06.290509 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:20:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:06.299613 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:20:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:06.300349 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:20:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:06.300477 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:20:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:07.467321 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:20:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:07.472492 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:20:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:07.472702 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:20:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:07.472759 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:20:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:09.467452 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:20:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:09.470088 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:20:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:09.473621 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:20:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:09.474384 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:20:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:09.474784 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:20:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:09.475342 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:20:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:09.476488 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:20:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:09.476550 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:20:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:16.400814 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:20:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:16.406684 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:20:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:16.407141 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:20:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:16.407204 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:20:20 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:20:21 api-int.lab.ocpipi.lan approve-csr.sh[19889]: No resources found
Jan 16 21:20:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:26.514764 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:20:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:26.525134 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:20:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:26.525675 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:20:26 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:26.525741 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:20:28 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.yb28jZ.mount: Deactivated successfully.
Jan 16 21:20:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:35.466419 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:20:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:35.473110 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:20:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:35.473304 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:20:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:35.473371 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:20:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:36.606195 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:20:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:36.612201 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:20:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:36.612380 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:20:36 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:36.612435 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:20:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:38.468614 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:20:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:38.477191 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:20:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:38.477415 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:20:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:38.478485 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:20:40 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:20:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:41.466314 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:20:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:41.471534 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:20:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:41.471725 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:20:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:41.471789 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:20:42 api-int.lab.ocpipi.lan approve-csr.sh[19967]: No resources found
Jan 16 21:20:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:42.467200 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:20:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:42.484235 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:20:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:42.484780 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:20:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:42.485333 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:20:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:43.839342 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 21:20:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:43.839642 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 21:20:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:43.839716 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 21:20:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:43.839772 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 21:20:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:43.840111 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 21:20:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:43.840204 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 21:20:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:43.840256 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 21:20:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:43.840311 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 21:20:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:43.840371 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 21:20:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:46.674698 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:20:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:46.680409 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:20:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:46.680777 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:20:46 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:46.681077 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:20:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:49.466669 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:20:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:49.474133 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:20:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:49.474587 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:20:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:49.474647 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:20:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:56.753308 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:20:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:56.758534 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:20:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:56.759772 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:20:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:20:56.760133 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:20:58 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.CKidNG.mount: Deactivated successfully.
Jan 16 21:21:00 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:21:03 api-int.lab.ocpipi.lan approve-csr.sh[20048]: No resources found
Jan 16 21:21:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:06.834663 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:21:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:06.841732 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:21:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:06.842525 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:21:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:06.843287 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:21:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:13.467476 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:21:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:13.475209 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:21:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:13.475737 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:21:13 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:13.476342 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:21:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:16.468376 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:21:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:16.476653 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:21:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:16.477050 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:21:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:16.477119 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:21:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:16.914684 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:21:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:16.921589 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:21:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:16.921770 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:21:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:16.922163 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:21:20 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:21:23 api-int.lab.ocpipi.lan approve-csr.sh[20129]: No resources found
Jan 16 21:21:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:27.018236 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:21:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:27.025487 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:21:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:27.026605 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:21:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:27.027272 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:21:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:28.467241 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:21:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:28.472339 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:21:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:28.472623 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:21:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:28.472692 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:21:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:37.102281 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:21:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:37.108711 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:21:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:37.109367 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:21:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:37.109576 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:21:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:37.466198 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:21:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:37.471509 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:21:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:37.472749 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:21:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:37.473785 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:21:40 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:21:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:41.466513 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:21:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:41.474316 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:21:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:41.474431 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:21:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:41.474481 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:21:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:43.841269 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 21:21:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:43.841428 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 21:21:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:43.841484 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 21:21:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:43.841595 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 21:21:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:43.841666 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 21:21:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:43.841732 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 21:21:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:43.841778 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 21:21:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:43.842105 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 21:21:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:43.842166 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 21:21:44 api-int.lab.ocpipi.lan approve-csr.sh[20209]: No resources found
Jan 16 21:21:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:47.184280 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:21:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:47.191274 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:21:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:47.191468 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:21:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:47.191528 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:21:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:52.466724 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:21:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:52.473698 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:21:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:52.474355 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:21:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:52.475212 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:21:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:55.467195 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:21:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:55.473624 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:21:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:55.475142 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:21:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:55.475560 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:21:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:57.257329 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:21:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:57.264384 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:21:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:57.264607 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:21:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:21:57.264667 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:22:00 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:22:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:02.469110 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:22:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:02.476230 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:22:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:02.477065 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:22:02 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:02.478090 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:22:05 api-int.lab.ocpipi.lan approve-csr.sh[20286]: No resources found
Jan 16 21:22:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:07.329263 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:22:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:07.336458 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:22:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:07.336540 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:22:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:07.336749 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:22:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:12.467442 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:22:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:12.476438 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:22:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:12.476637 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:22:12 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:12.476697 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:22:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:17.432110 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:22:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:17.439290 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:22:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:17.439522 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:22:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:17.439582 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:22:21 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:22:25 api-int.lab.ocpipi.lan approve-csr.sh[20364]: No resources found
Jan 16 21:22:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:27.502725 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:22:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:27.510567 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:22:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:27.510801 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:22:27 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:27.511127 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:22:28 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.0rlNmq.mount: Deactivated successfully.
Jan 16 21:22:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:30.468724 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:22:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:30.477627 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:22:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:30.478555 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:22:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:30.479651 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:22:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:34.467365 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:22:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:34.476076 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:22:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:34.476571 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:22:34 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:34.476644 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:22:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:37.581293 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:22:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:37.587303 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:22:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:37.587765 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:22:37 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:37.588468 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:22:41 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:22:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:42.476564 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:22:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:42.488400 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:22:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:42.488665 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:22:42 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:42.488765 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:22:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:43.844290 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 21:22:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:43.844568 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 21:22:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:43.844642 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 21:22:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:43.844685 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 21:22:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:43.844753 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 21:22:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:43.845231 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 21:22:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:43.845314 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 21:22:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:43.845363 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 21:22:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:43.845428 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 21:22:43 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:22:43.971744467Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2" id=0bec2c66-9664-4f66-ac06-60b793c53c37 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:22:43 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:22:43.973523024Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a5beb712367dd5020b5a7b99c2ffbfcd91d3c6c425625d5cc816f58cf145564f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2],Size_:448590957,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=0bec2c66-9664-4f66-ac06-60b793c53c37 name=/runtime.v1.ImageService/ImageStatus
Jan 16 21:22:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:45.466528 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:22:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:45.472262 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:22:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:45.472650 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:22:45 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:45.473077 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:22:46 api-int.lab.ocpipi.lan approve-csr.sh[20445]: No resources found
Jan 16 21:22:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:47.466628 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:22:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:47.472248 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:22:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:47.473404 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:22:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:47.473473 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:22:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:47.668416 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:22:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:47.675399 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:22:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:47.675662 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:22:47 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:47.675728 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:22:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:57.734099 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:22:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:57.741277 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:22:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:57.741598 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:22:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:22:57.741668 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:23:01 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:23:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:01.470802 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:23:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:01.483737 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:23:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:01.484496 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:23:01 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:01.484558 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:23:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:05.466792 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:23:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:05.491380 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:23:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:05.491625 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:23:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:05.491686 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:23:07 api-int.lab.ocpipi.lan approve-csr.sh[20525]: No resources found
Jan 16 21:23:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:07.863303 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:23:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:07.870347 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:23:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:07.871120 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:23:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:07.871198 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:23:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:17.943543 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:23:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:17.952255 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:23:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:17.952720 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:23:17 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:17.952790 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:23:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:18.467708 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:23:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:18.469446 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:23:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:18.475915 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:23:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:18.476634 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:23:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:18.476696 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:23:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:18.502642 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:23:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:18.508367 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:23:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:18.508590 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:23:21 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:23:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:28.042279 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:23:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:28.048311 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:23:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:28.048439 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:23:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:28.048517 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:23:28 api-int.lab.ocpipi.lan approve-csr.sh[20603]: No resources found
Jan 16 21:23:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:35.467141 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:23:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:35.472369 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:23:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:35.472637 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:23:35 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:35.472694 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:23:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:38.148398 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:23:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:38.155346 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:23:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:38.155549 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:23:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:38.155604 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:23:41 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:23:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:43.847352 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 21:23:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:43.847668 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 21:23:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:43.847766 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 21:23:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:43.848177 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 21:23:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:43.848293 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 21:23:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:43.848353 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 21:23:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:43.848410 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 21:23:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:43.848454 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running
Jan 16 21:23:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:43.848517 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 21:23:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:48.220511 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:23:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:48.229775 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:23:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:48.230193 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:23:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:48.230367 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:23:48 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.15Zovn.mount: Deactivated successfully.
Jan 16 21:23:48 api-int.lab.ocpipi.lan approve-csr.sh[20686]: No resources found
Jan 16 21:23:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:49.466716 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:23:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:49.468268 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:23:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:49.473355 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:23:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:49.474133 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:23:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:49.474489 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:23:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:49.473622 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:23:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:49.475680 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:23:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:49.476460 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:23:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:58.361763 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:23:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:58.367522 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:23:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:58.367910 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:23:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:23:58.368195 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:23:58 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.Me0ZuG.mount: Deactivated successfully.
Jan 16 21:24:01 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:24:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:08.438330 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:24:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:08.444219 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:24:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:08.444524 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:24:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:08.444585 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:24:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:09.467457 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:24:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:09.473689 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:24:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:09.474090 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:24:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:09.474157 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:24:09 api-int.lab.ocpipi.lan approve-csr.sh[20783]: No resources found
Jan 16 21:24:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:14.467168 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:24:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:14.472741 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:24:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:14.473469 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:24:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:14.473600 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:24:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:15.467288 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:24:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:15.474206 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:24:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:15.474378 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:24:15 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:15.474433 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:24:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:18.534168 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:24:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:18.551898 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:24:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:18.552272 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:24:18 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:18.552327 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:24:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:19.468414 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:24:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:19.476277 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:24:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:19.477329 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:24:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:19.477491 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:24:21 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:24:28 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.OMPjP4.mount: Deactivated successfully.
Jan 16 21:24:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:28.635420 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:24:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:28.640771 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:24:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:28.641212 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:24:28 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:28.641279 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:24:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:29.467209 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:24:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:29.468176 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:24:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:29.476495 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:24:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:29.476707 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:24:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:29.476764 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:24:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:29.477241 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:24:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:29.477320 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:24:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:29.477375 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:24:30 api-int.lab.ocpipi.lan approve-csr.sh[20859]: No resources found
Jan 16 21:24:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:38.736167 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:24:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:38.741725 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:24:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:38.742117 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:24:38 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:38.742178 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:24:39 api-int.lab.ocpipi.lan bootkube.sh[15560]: Error: error while checking pod status: timed out waiting for the condition
Jan 16 21:24:39 api-int.lab.ocpipi.lan bootkube.sh[15560]: Tearing down temporary bootstrap control plane...
Jan 16 21:24:39 api-int.lab.ocpipi.lan bootkube.sh[15560]: Error: error while checking pod status: timed out waiting for the condition
Jan 16 21:24:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:39.281768 2579 kubelet.go:2435] "SyncLoop REMOVE" source="file" pods=[openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain]
Jan 16 21:24:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:39.282576 2579 kubelet.go:2435] "SyncLoop REMOVE" source="file" pods=[openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain]
Jan 16 21:24:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:39.282674 2579 kubelet.go:2435] "SyncLoop REMOVE" source="file" pods=[openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain]
Jan 16 21:24:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:39.282749 2579 kubelet.go:2435] "SyncLoop REMOVE" source="file" pods=[kube-system/bootstrap-kube-controller-manager-localhost.localdomain]
Jan 16 21:24:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:39.283066 2579 kubelet.go:2435] "SyncLoop REMOVE" source="file" pods=[kube-system/bootstrap-kube-scheduler-localhost.localdomain]
Jan 16 21:24:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:39.284345 2579 kuberuntime_container.go:742] "Killing container with a grace period" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" podUID=05c96ce8daffad47cf2b15e2a67753ec containerName="cluster-version-operator" containerID="cri-o://64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f" gracePeriod=130
Jan 16 21:24:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:39.286355 2579 kuberuntime_container.go:742] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" podUID=b8b0f2012ce2b145220be181d7a5aa55 containerName="kube-scheduler" containerID="cri-o://b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998" gracePeriod=30
Jan 16 21:24:39 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:39.286721951Z" level=info msg="Stopping container: 64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f (timeout: 130s)" id=4cfe66bd-b481-4b42-a4f5-0a232f2fce46 name=/runtime.v1.RuntimeService/StopContainer
Jan 16 21:24:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:39.290436 2579 kuberuntime_container.go:742] "Killing container with a grace period" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" podUID=a6238b9f1f3a2f2bd2b4b1b0c7962bdd containerName="cloud-credential-operator" containerID="cri-o://53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026" gracePeriod=30
Jan 16 21:24:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:39.291038 2579 kuberuntime_container.go:742] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" podUID=1cb3be1f2df5273e9b77f7050777bcbe containerName="kube-apiserver" containerID="cri-o://d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6" gracePeriod=135
Jan 16 21:24:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:39.291440 2579 kuberuntime_container.go:742] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" podUID=1cb3be1f2df5273e9b77f7050777bcbe containerName="kube-apiserver-insecure-readyz" containerID="cri-o://832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0" gracePeriod=135
Jan 16 21:24:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:39.291666 2579 kuberuntime_container.go:742] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" podUID=c3db590e56a311b869092b2d6b1724e5 containerName="cluster-policy-controller" containerID="cri-o://180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe" gracePeriod=30
Jan 16 21:24:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:39.292104 2579 kuberuntime_container.go:742] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" podUID=c3db590e56a311b869092b2d6b1724e5 containerName="kube-controller-manager" containerID="cri-o://5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6" gracePeriod=30
Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: libpod-3232abd9ed1814fde82a2012389e5479b4bf5a09df8d026c4d9e28bb75b0447b.scope: Deactivated successfully.
Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: libpod-3232abd9ed1814fde82a2012389e5479b4bf5a09df8d026c4d9e28bb75b0447b.scope: Consumed 2.056s CPU time.
Jan 16 21:24:39 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:39.316090093Z" level=info msg="Stopping container: b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998 (timeout: 30s)" id=363b79ed-2273-4ca7-a86e-8d1fc28bf1d5 name=/runtime.v1.RuntimeService/StopContainer Jan 16 21:24:39 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:39.324341703Z" level=info msg="Stopping container: 5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6 (timeout: 30s)" id=f10632fd-c4c2-407b-a408-4869bc43825b name=/runtime.v1.RuntimeService/StopContainer Jan 16 21:24:39 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:39.326232892Z" level=info msg="Stopping container: d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6 (timeout: 135s)" id=00ef27d7-fa78-4282-9373-d1a76e9fa8be name=/runtime.v1.RuntimeService/StopContainer Jan 16 21:24:39 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:39.326567377Z" level=info msg="Stopping container: 53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026 (timeout: 30s)" id=9632ef27-6634-4ffb-8e01-2346a676aaf6 name=/runtime.v1.RuntimeService/StopContainer Jan 16 21:24:39 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:39.326773628Z" level=info msg="Stopping container: 180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe (timeout: 30s)" id=075598bb-186b-429d-878e-449df15110c4 name=/runtime.v1.RuntimeService/StopContainer Jan 16 21:24:39 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:39.326680286Z" level=info msg="Stopping container: 832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0 (timeout: 135s)" id=9c0990df-201a-4c9f-a4cd-802546dacc2b name=/runtime.v1.RuntimeService/StopContainer Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: crio-832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0.scope: Deactivated successfully. Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0.scope: Deactivated successfully. Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: crio-64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f.scope: Deactivated successfully. Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: crio-64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f.scope: Consumed 1min 1.403s CPU time. Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f.scope: Deactivated successfully. Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f.scope: Consumed 3.250s CPU time. Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: crio-5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6.scope: Deactivated successfully. Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: crio-5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6.scope: Consumed 1min 31.790s CPU time. Jan 16 21:24:39 api-int.lab.ocpipi.lan conmon[16232]: conmon 5aa458e98593fb0138f1 : Failed to open cgroups file: /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3db590e56a311b869092b2d6b1724e5.slice/crio-5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6.scope/memory.events Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6.scope: Deactivated successfully. 
Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: crio-b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998.scope: Deactivated successfully.
Jan 16 21:24:39 api-int.lab.ocpipi.lan conmon[16348]: conmon b20d4839bb3528e045a4 : Failed to open cgroups file: /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8b0f2012ce2b145220be181d7a5aa55.slice/crio-b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998.scope/memory.events
Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: crio-b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998.scope: Consumed 16.753s CPU time.
Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: crio-53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026.scope: Deactivated successfully.
Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: crio-53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026.scope: Consumed 6.343s CPU time.
Jan 16 21:24:39 api-int.lab.ocpipi.lan conmon[15648]: conmon 53b59e3ddb3be8d72b8c : Failed to open cgroups file: /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6238b9f1f3a2f2bd2b4b1b0c7962bdd.slice/crio-53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026.scope/memory.events
Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026.scope: Deactivated successfully.
Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe.scope: Deactivated successfully.
Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: crio-180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe.scope: Deactivated successfully.
Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: crio-180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe.scope: Consumed 40.724s CPU time.
Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998.scope: Deactivated successfully.
Jan 16 21:24:39 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-98b41311de6388be94a54dd2c0bc249280b3ad89116b49a4a90a2029a2158fd5-merged.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-a89d95d722f1bbf7948ce2e3bab39b0d622255e5cea9b0c72a06a448c72bbb03-merged.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:40.043202841Z" level=info msg="Stopped container 832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver-insecure-readyz" id=9c0990df-201a-4c9f-a4cd-802546dacc2b name=/runtime.v1.RuntimeService/StopContainer
Jan 16 21:24:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:40.068715622Z" level=info msg="Stopped container b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998: kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler" id=363b79ed-2273-4ca7-a86e-8d1fc28bf1d5 name=/runtime.v1.RuntimeService/StopContainer
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3232abd9ed1814fde82a2012389e5479b4bf5a09df8d026c4d9e28bb75b0447b-userdata-shm.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:40.090796089Z" level=info msg="Stopping pod sandbox: 698793765b36d11ab57ecfa5b37f206a0c00f023a38088216f8e7b16931b26a4" id=f4c79692-eb82-4944-8ff8-b754d48760fa name=/runtime.v1.RuntimeService/StopPodSandbox Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-01efd3614d86cb318e0a0ca9a931e22c56e68fda7247d7047525563b5df5a43c-merged.mount: Deactivated successfully. Jan 16 21:24:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:40.100201908Z" level=info msg="Stopped container 64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator" id=4cfe66bd-b481-4b42-a4f5-0a232f2fce46 name=/runtime.v1.RuntimeService/StopContainer Jan 16 21:24:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:40.106521786Z" level=info msg="Stopping pod sandbox: f7a6f707f3bda3601ec15d7bd6975ac503d8a121077b16831d7ae849142883fd" id=78f9b94e-16f6-4336-bc6a-b3ffdea980ca name=/runtime.v1.RuntimeService/StopPodSandbox Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-b859fcd8601c39ba98060c303667c8aea9f6101b0ab23e98e3212cfa325923d6-merged.mount: Deactivated successfully. Jan 16 21:24:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:40.118402812Z" level=info msg="Stopped pod sandbox: 698793765b36d11ab57ecfa5b37f206a0c00f023a38088216f8e7b16931b26a4" id=f4c79692-eb82-4944-8ff8-b754d48760fa name=/runtime.v1.RuntimeService/StopPodSandbox Jan 16 21:24:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:40.134362528Z" level=info msg="Stopped pod sandbox: f7a6f707f3bda3601ec15d7bd6975ac503d8a121077b16831d7ae849142883fd" id=78f9b94e-16f6-4336-bc6a-b3ffdea980ca name=/runtime.v1.RuntimeService/StopPodSandbox Jan 16 21:24:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:40.172575604Z" level=info msg="Stopped container 53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026: openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/cloud-credential-operator" id=9632ef27-6634-4ffb-8e01-2346a676aaf6 name=/runtime.v1.RuntimeService/StopContainer Jan 16 21:24:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:40.174082136Z" level=info msg="Stopping pod sandbox: d33fcfd348cf2c3a24fa9aa431b71f77fb2351c6c87e2ab3eb4270f280959111" id=47e12bc3-6554-48b5-b2d3-6f23459c7238 name=/runtime.v1.RuntimeService/StopPodSandbox Jan 16 21:24:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:40.196599609Z" level=info msg="Stopped container 180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/cluster-policy-controller" id=075598bb-186b-429d-878e-449df15110c4 name=/runtime.v1.RuntimeService/StopContainer Jan 16 21:24:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:40.205169057Z" level=info msg="Stopped pod sandbox: d33fcfd348cf2c3a24fa9aa431b71f77fb2351c6c87e2ab3eb4270f280959111" id=47e12bc3-6554-48b5-b2d3-6f23459c7238 name=/runtime.v1.RuntimeService/StopPodSandbox Jan 16 21:24:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:40.206557553Z" level=info msg="Stopped container 5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager" id=f10632fd-c4c2-407b-a408-4869bc43825b 
name=/runtime.v1.RuntimeService/StopContainer Jan 16 21:24:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:40.209080780Z" level=info msg="Stopping pod sandbox: df7414d7cd6ee869f535f4228fa7b7b23f6ac8632001d1dce75dcc25250e3f1b" id=30375dc9-cbd7-44b6-b618-6ce324589733 name=/runtime.v1.RuntimeService/StopPodSandbox Jan 16 21:24:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:40.211666253Z" level=info msg="Stopped pod sandbox: df7414d7cd6ee869f535f4228fa7b7b23f6ac8632001d1dce75dcc25250e3f1b" id=30375dc9-cbd7-44b6-b618-6ce324589733 name=/runtime.v1.RuntimeService/StopPodSandbox Jan 16 21:24:40 api-int.lab.ocpipi.lan bootkube.sh[14240]: Using /opt/openshift/auth/kubeconfig as KUBECONFIG Jan 16 21:24:40 api-int.lab.ocpipi.lan bootkube.sh[14240]: Gathering cluster resources ... Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.275355 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-logs\") pod \"b8b0f2012ce2b145220be181d7a5aa55\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.275476 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-secrets\") pod \"b8b0f2012ce2b145220be181d7a5aa55\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.275682 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-secrets" (OuterVolumeSpecName: "secrets") pod "b8b0f2012ce2b145220be181d7a5aa55" (UID: "b8b0f2012ce2b145220be181d7a5aa55"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.275763 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-logs" (OuterVolumeSpecName: "logs") pod "b8b0f2012ce2b145220be181d7a5aa55" (UID: "b8b0f2012ce2b145220be181d7a5aa55"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21126]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get nodes -o jsonpath -l node-role.kubernetes.io/master --template {range .items[*]}{.metadata.name}{"\n"}{end} Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21130]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get pods --all-namespaces --template {{ range .items }}{{ $name := .metadata.name }}{{ $ns := .metadata.namespace }}{{ range .spec.containers }}-n {{ $ns }} {{ $name }} -c {{ .name }}{{ "\n" }}{{ end }}{{ range .spec.initContainers }}-n {{ $ns }} {{ $name }} -c {{ .name }}{{ "\n" }}{{ end }}{{ end }} Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Created slice User Slice of UID 0. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Starting User Runtime Directory /run/user/0... 
Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21122]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get nodes -o jsonpath --template {range .items[*]}{.metadata.name}{"\n"}{end} Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.376749 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-config\") pod \"c3db590e56a311b869092b2d6b1724e5\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.376890 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-kubeconfig\") pod \"05c96ce8daffad47cf2b15e2a67753ec\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.377074 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-etc-kubernetes-cloud\") pod \"c3db590e56a311b869092b2d6b1724e5\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.377118 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-ssl-certs-host\") pod \"c3db590e56a311b869092b2d6b1724e5\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.377152 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-logs\") pod \"c3db590e56a311b869092b2d6b1724e5\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.377190 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a6238b9f1f3a2f2bd2b4b1b0c7962bdd-secrets\") pod \"a6238b9f1f3a2f2bd2b4b1b0c7962bdd\" (UID: \"a6238b9f1f3a2f2bd2b4b1b0c7962bdd\") " Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.377224 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-secrets\") pod \"c3db590e56a311b869092b2d6b1724e5\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.377262 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-etc-ssl-certs\") pod \"05c96ce8daffad47cf2b15e2a67753ec\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.377338 2579 reconciler_common.go:300] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-logs\") on node \"localhost.localdomain\" DevicePath \"\"" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.377370 2579 reconciler_common.go:300] "Volume detached for volume \"secrets\" (UniqueName: 
\"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-secrets\") on node \"localhost.localdomain\" DevicePath \"\"" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.377432 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "05c96ce8daffad47cf2b15e2a67753ec" (UID: "05c96ce8daffad47cf2b15e2a67753ec"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.377496 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-config" (OuterVolumeSpecName: "config") pod "c3db590e56a311b869092b2d6b1724e5" (UID: "c3db590e56a311b869092b2d6b1724e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.377568 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "05c96ce8daffad47cf2b15e2a67753ec" (UID: "05c96ce8daffad47cf2b15e2a67753ec"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.377609 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "c3db590e56a311b869092b2d6b1724e5" (UID: "c3db590e56a311b869092b2d6b1724e5"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.377648 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "c3db590e56a311b869092b2d6b1724e5" (UID: "c3db590e56a311b869092b2d6b1724e5"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.377685 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-logs" (OuterVolumeSpecName: "logs") pod "c3db590e56a311b869092b2d6b1724e5" (UID: "c3db590e56a311b869092b2d6b1724e5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.377726 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6238b9f1f3a2f2bd2b4b1b0c7962bdd-secrets" (OuterVolumeSpecName: "secrets") pod "a6238b9f1f3a2f2bd2b4b1b0c7962bdd" (UID: "a6238b9f1f3a2f2bd2b4b1b0c7962bdd"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.377766 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-secrets" (OuterVolumeSpecName: "secrets") pod "c3db590e56a311b869092b2d6b1724e5" (UID: "c3db590e56a311b869092b2d6b1724e5"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21134]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get pods -l apiserver=true --all-namespaces --template {{ range .items }}-n {{ .metadata.namespace }} {{ .metadata.name }}{{ "\n" }}{{ end }} Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Finished User Runtime Directory /run/user/0. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Starting User Manager for UID 0... Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21140]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get apiservices -o json Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21147]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get clusteroperators -o json Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21152]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get clusterversion -o json Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[21157]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0) Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.473034 2579 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=05c96ce8daffad47cf2b15e2a67753ec path="/var/lib/kubelet/pods/05c96ce8daffad47cf2b15e2a67753ec/volumes" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.473715 2579 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=a6238b9f1f3a2f2bd2b4b1b0c7962bdd path="/var/lib/kubelet/pods/a6238b9f1f3a2f2bd2b4b1b0c7962bdd/volumes" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.474535 2579 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=b8b0f2012ce2b145220be181d7a5aa55 path="/var/lib/kubelet/pods/b8b0f2012ce2b145220be181d7a5aa55/volumes" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.475225 2579 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=c3db590e56a311b869092b2d6b1724e5 path="/var/lib/kubelet/pods/c3db590e56a311b869092b2d6b1724e5/volumes" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.494095 2579 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-config\") on node \"localhost.localdomain\" DevicePath \"\"" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.494275 2579 reconciler_common.go:300] "Volume detached for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-kubeconfig\") on node \"localhost.localdomain\" DevicePath \"\"" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.494308 2579 reconciler_common.go:300] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-ssl-certs-host\") on node \"localhost.localdomain\" DevicePath \"\"" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.494336 2579 reconciler_common.go:300] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-logs\") on node \"localhost.localdomain\" DevicePath \"\"" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 
21:24:40.494360 2579 reconciler_common.go:300] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-etc-kubernetes-cloud\") on node \"localhost.localdomain\" DevicePath \"\"" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.494462 2579 reconciler_common.go:300] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-secrets\") on node \"localhost.localdomain\" DevicePath \"\"" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.494492 2579 reconciler_common.go:300] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a6238b9f1f3a2f2bd2b4b1b0c7962bdd-secrets\") on node \"localhost.localdomain\" DevicePath \"\"" Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.494515 2579 reconciler_common.go:300] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-etc-ssl-certs\") on node \"localhost.localdomain\" DevicePath \"\"" Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21159]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get configmaps --all-namespaces -o json Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Removed slice libcontainer container kubepods-besteffort-poda6238b9f1f3a2f2bd2b4b1b0c7962bdd.slice. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: kubepods-besteffort-poda6238b9f1f3a2f2bd2b4b1b0c7962bdd.slice: Consumed 6.343s CPU time. Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21164]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get csr -o json Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Removed slice libcontainer container kubepods-besteffort-pod05c96ce8daffad47cf2b15e2a67753ec.slice. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: kubepods-besteffort-pod05c96ce8daffad47cf2b15e2a67753ec.slice: Consumed 1min 1.403s CPU time. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Removed slice libcontainer container kubepods-burstable-podb8b0f2012ce2b145220be181d7a5aa55.slice. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: kubepods-burstable-podb8b0f2012ce2b145220be181d7a5aa55.slice: Consumed 19.640s CPU time. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Removed slice libcontainer container kubepods-burstable-podc3db590e56a311b869092b2d6b1724e5.slice. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: kubepods-burstable-podc3db590e56a311b869092b2d6b1724e5.slice: Consumed 2min 20.839s CPU time. 
Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21179]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get kubeapiserver -o json
Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21200]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get machines --all-namespaces -o json
Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21175]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get events --all-namespaces -o json
Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21185]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get kubecontrollermanager -o json
Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21170]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get endpoints --all-namespaces -o json
Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21214]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get namespaces -o json
Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21206]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get machineconfigpools -o json
Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21210]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get machineconfigs -o json
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[21157]: Queued start job for default target Main User Target.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[21157]: Created slice User Application Slice.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[21157]: Started Daily Cleanup of User's Temporary Directories.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[21157]: Reached target Paths.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[21157]: Reached target Timers.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[21157]: Starting D-Bus User Message Bus Socket...
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[21157]: Starting Create User's Volatile Files and Directories...
Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.816446 2579 generic.go:334] "Generic (PLEG): container finished" podID=b8b0f2012ce2b145220be181d7a5aa55 containerID="b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998" exitCode=0
Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.816655 2579 scope.go:115] "RemoveContainer" containerID="b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998"
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[21157]: Listening on D-Bus User Message Bus Socket.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[21157]: Reached target Sockets.
Jan 16 21:24:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:40.823358464Z" level=info msg="Removing container: b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998" id=7eb4eafc-c3eb-48e0-bca8-aba9939f2c48 name=/runtime.v1.RuntimeService/RemoveContainer Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.827430 2579 generic.go:334] "Generic (PLEG): container finished" podID=a6238b9f1f3a2f2bd2b4b1b0c7962bdd containerID="53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026" exitCode=0 Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[21157]: Finished Create User's Volatile Files and Directories. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[21157]: Reached target Basic System. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[21157]: Reached target Main User Target. Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.835162 2579 generic.go:334] "Generic (PLEG): container finished" podID=05c96ce8daffad47cf2b15e2a67753ec containerID="64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f" exitCode=0 Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[21157]: Startup finished in 342ms. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Started User Manager for UID 0. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Started Session c25 of User root. Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.844670 2579 generic.go:334] "Generic (PLEG): container finished" podID=1cb3be1f2df5273e9b77f7050777bcbe containerID="832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0" exitCode=0 Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Started Session c26 of User root. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Started Session c27 of User root. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Started Session c28 of User root. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Started Session c29 of User root. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Started Session c30 of User root. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Started Session c31 of User root. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Started Session c32 of User root. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Started Session c33 of User root. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Started Session c34 of User root. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Started Session c35 of User root. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Started Session c36 of User root. Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Started Session c37 of User root. Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21219]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get nodes -o json Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Started Session c38 of User root. 
Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.881398 2579 generic.go:334] "Generic (PLEG): container finished" podID=c3db590e56a311b869092b2d6b1724e5 containerID="5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6" exitCode=0
Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.881443 2579 generic.go:334] "Generic (PLEG): container finished" podID=c3db590e56a311b869092b2d6b1724e5 containerID="180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe" exitCode=0
Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21230]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get pods --all-namespaces -o json
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Started Session c39 of User root.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Started Session c40 of User root.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: Started Session c41 of User root.
Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21225]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get openshiftapiserver -o json
Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21235]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get rolebindings --all-namespaces -o json
Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21126]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:24:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:40.923373951Z" level=info msg="Removed container b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998: kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler" id=7eb4eafc-c3eb-48e0-bca8-aba9939f2c48 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:24:40 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:40.929745 2579 scope.go:115] "RemoveContainer" containerID="5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4"
Jan 16 21:24:40 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:40.945505484Z" level=info msg="Removing container: 5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4" id=89d3adcd-a7c9-492b-b3d0-e1499fe4a633 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:24:40 api-int.lab.ocpipi.lan sudo[21122]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-f9e375c40380ca0b6e2162110aca3950fe86098168e0a5e459281c7bf82bfa1c-merged.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-caa5d6560dd5a1e61ce94bc4a09c377eeba8b90067673939733df634b5e53410-merged.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: run-netns-543bd603\x2d0444\x2d4cc1\x2d88c5\x2d54b37db82231.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: run-ipcns-543bd603\x2d0444\x2d4cc1\x2d88c5\x2d54b37db82231.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: run-utsns-543bd603\x2d0444\x2d4cc1\x2d88c5\x2d54b37db82231.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: run-containers-storage-overlay\x2dcontainers-698793765b36d11ab57ecfa5b37f206a0c00f023a38088216f8e7b16931b26a4-userdata-shm.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-dcb8396f4037d0be00d14570c1ea0b73d8fb1dbbba0972b4779a3ab9201dd830-merged.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: run-netns-9f7764f6\x2d6e7f\x2d44a9\x2d8f0d\x2d5aaa42d4a008.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: run-ipcns-9f7764f6\x2d6e7f\x2d44a9\x2d8f0d\x2d5aaa42d4a008.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: run-utsns-9f7764f6\x2d6e7f\x2d44a9\x2d8f0d\x2d5aaa42d4a008.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-470d0cf9a70c0fab92e4e49d06636f3a4f1431b019e53df02c31a080420d216c-merged.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: run-containers-storage-overlay\x2dcontainers-df7414d7cd6ee869f535f4228fa7b7b23f6ac8632001d1dce75dcc25250e3f1b-userdata-shm.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-f751fca58e332a21bee4e561abf317412d9be8f4f2df27b1c06c9cdb6d948f5e-merged.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: run-netns-117decf4\x2dfc0d\x2d4ab3\x2db115\x2d7d5720d27855.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: run-ipcns-117decf4\x2dfc0d\x2d4ab3\x2db115\x2d7d5720d27855.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: run-utsns-117decf4\x2dfc0d\x2d4ab3\x2db115\x2d7d5720d27855.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: run-containers-storage-overlay\x2dcontainers-d33fcfd348cf2c3a24fa9aa431b71f77fb2351c6c87e2ab3eb4270f280959111-userdata-shm.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-ae30ef32fa27b793cd033b04bb16108538cd33a2aab3fc8f5a73662f560bfff5-merged.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: run-netns-b948c65c\x2d7018\x2d41b0\x2d9a58\x2dabfcd99aebb8.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: run-ipcns-b948c65c\x2d7018\x2d41b0\x2d9a58\x2dabfcd99aebb8.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: run-utsns-b948c65c\x2d7018\x2d41b0\x2d9a58\x2dabfcd99aebb8.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: run-containers-storage-overlay\x2dcontainers-f7a6f707f3bda3601ec15d7bd6975ac503d8a121077b16831d7ae849142883fd-userdata-shm.mount: Deactivated successfully.
Jan 16 21:24:40 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-2b35e99198985bd9b3aa36a7278f31e84335df8ea59e35725989d947190e7c49-merged.mount: Deactivated successfully.
Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21130]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:24:41 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-7a93c67bc5e480ea73b66a9b15317a3b15c4bb78da4e2319b605a7a38f59e0e7-merged.mount: Deactivated successfully.
Jan 16 21:24:41 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:41.035768152Z" level=info msg="Removed container 5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4: kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler" id=89d3adcd-a7c9-492b-b3d0-e1499fe4a633 name=/runtime.v1.RuntimeService/RemoveContainer Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.038487 2579 scope.go:115] "RemoveContainer" containerID="b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:24:41.041629 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998\": container with ID starting with b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998 not found: ID does not exist" containerID="b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.041756 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998} err="failed to get container status \"b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998\": rpc error: code = NotFound desc = could not find container \"b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998\": container with ID starting with b20d4839bb3528e045a42236e133ba6b232c78e13c82c5e2e3696ddcf72ef998 not found: ID does not exist" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.041784 2579 scope.go:115] "RemoveContainer" containerID="5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:24:41.043135 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4\": container with ID starting with 5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4 not found: ID does not exist" containerID="5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.043184 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4} err="failed to get container status \"5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4\": rpc error: code = NotFound desc = could not find container \"5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4\": container with ID starting with 5caf0d427b79aad6bc0b06abe3c0667fd38eb99b83c2fa58f5e60ffc61d0dbe4 not found: ID does not exist" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.043204 2579 scope.go:115] "RemoveContainer" containerID="53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026" Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21134]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jan 16 21:24:41 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:41.050110426Z" level=info msg="Removing container: 53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026" id=83e0f525-51d1-4bfd-992e-e513a35e3d9d name=/runtime.v1.RuntimeService/RemoveContainer Jan 16 21:24:41 
api-int.lab.ocpipi.lan sudo[21253]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get secrets --all-namespaces Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21261]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get secrets --all-namespaces -o=custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,TYPE:.type,ANNOTATIONS:.metadata.annotations Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21140]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21147]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21244]: root : PWD=/var/opt/openshift ; USER=root ; ENV=KUBECONFIG=/opt/openshift/auth/kubeconfig ; COMMAND=/bin/oc --request-timeout=5s get roles --all-namespaces -o json Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21152]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jan 16 21:24:41 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:41.139759116Z" level=info msg="Removed container 53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026: openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/cloud-credential-operator" id=83e0f525-51d1-4bfd-992e-e513a35e3d9d name=/runtime.v1.RuntimeService/RemoveContainer Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21159]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.142710 2579 scope.go:115] "RemoveContainer" containerID="53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:24:41.154551 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026\": container with ID starting with 53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026 not found: ID does not exist" containerID="53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.154675 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026} err="failed to get container status \"53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026\": rpc error: code = NotFound desc = could not find container \"53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026\": container with ID starting with 53b59e3ddb3be8d72b8c498096ed5c4ebc9db93cc0f39805548940648f1df026 not found: ID does not exist" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.154705 2579 scope.go:115] "RemoveContainer" containerID="64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f" Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21164]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jan 16 21:24:41 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:41.166338243Z" level=info msg="Removing container: 64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f" id=a26e0e04-d3f8-428f-b1fa-b8d730639547 name=/runtime.v1.RuntimeService/RemoveContainer Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21179]: 
pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21200]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21175]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21170]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21185]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jan 16 21:24:41 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:41.273320865Z" level=info msg="Removed container 64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator" id=a26e0e04-d3f8-428f-b1fa-b8d730639547 name=/runtime.v1.RuntimeService/RemoveContainer Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.275067 2579 scope.go:115] "RemoveContainer" containerID="64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:24:41.276037 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f\": container with ID starting with 64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f not found: ID does not exist" containerID="64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.276100 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f} err="failed to get container status \"64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f\": rpc error: code = NotFound desc = could not find container \"64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f\": container with ID starting with 64055b9c804821058ad482716725362f03c181fd5e1434f6414b91ee00f0671f not found: ID does not exist" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.276125 2579 scope.go:115] "RemoveContainer" containerID="5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6" Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21214]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jan 16 21:24:41 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:41.290299133Z" level=info msg="Removing container: 5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6" id=aa8de68c-343f-4deb-be2a-351373c86880 name=/runtime.v1.RuntimeService/RemoveContainer Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21206]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21210]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jan 16 21:24:41 api-int.lab.ocpipi.lan systemd[1]: Started Session c42 of User root. Jan 16 21:24:41 api-int.lab.ocpipi.lan systemd[1]: Started Session c43 of User root. Jan 16 21:24:41 api-int.lab.ocpipi.lan systemd[1]: Started Session c44 of User root. Jan 16 21:24:41 api-int.lab.ocpipi.lan systemd[1]: Started Session c45 of User root. Jan 16 21:24:41 api-int.lab.ocpipi.lan systemd[1]: Started Session c46 of User root. 
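Among the queries above, the secrets listing uses -o=custom-columns to project only selected metadata fields instead of dumping full secret objects. The same invocation as logged by sudo, reformatted across lines for readability:

  oc --request-timeout=5s get secrets --all-namespaces \
    -o=custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,TYPE:.type,ANNOTATIONS:.metadata.annotations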
Jan 16 21:24:41 api-int.lab.ocpipi.lan systemd[1]: Started Session c47 of User root.
Jan 16 21:24:41 api-int.lab.ocpipi.lan systemd[1]: bootkube.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 21:24:41 api-int.lab.ocpipi.lan systemd[1]: Started Session c48 of User root.
Jan 16 21:24:41 api-int.lab.ocpipi.lan systemd[1]: bootkube.service: Failed with result 'exit-code'.
Jan 16 21:24:41 api-int.lab.ocpipi.lan systemd[1]: bootkube.service: Consumed 29.682s CPU time.
Jan 16 21:24:41 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:41.451477285Z" level=info msg="Removed container 5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager" id=aa8de68c-343f-4deb-be2a-351373c86880 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.452760 2579 scope.go:115] "RemoveContainer" containerID="180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe"
Jan 16 21:24:41 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:41.456883350Z" level=info msg="Removing container: 180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe" id=72233907-6083-45aa-a676-d0d6886c40f6 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21219]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21230]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21235]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21225]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:24:41 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:41.533892901Z" level=info msg="Removed container 180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/cluster-policy-controller" id=72233907-6083-45aa-a676-d0d6886c40f6 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.534562 2579 scope.go:115] "RemoveContainer" containerID="fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa"
Jan 16 21:24:41 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:41.545076362Z" level=info msg="Removing container: fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa" id=de38c529-073e-4049-b01c-735d33077478 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21253]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21261]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:24:41 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-cea6748c83e1dbad144683128e20970b9cfff47209de2af95889923b267d3cbd-merged.mount: Deactivated successfully.
Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21244]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jan 16 21:24:41 api-int.lab.ocpipi.lan sudo[21244]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:41 api-int.lab.ocpipi.lan systemd[1]: session-c48.scope: Deactivated successfully.
Jan 16 21:24:41 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:41.623604064Z" level=info msg="Removed container fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager" id=de38c529-073e-4049-b01c-735d33077478 name=/runtime.v1.RuntimeService/RemoveContainer Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.624232 2579 scope.go:115] "RemoveContainer" containerID="5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:24:41.626722 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6\": container with ID starting with 5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6 not found: ID does not exist" containerID="5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.627108 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6} err="failed to get container status \"5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6\": rpc error: code = NotFound desc = could not find container \"5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6\": container with ID starting with 5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6 not found: ID does not exist" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.627450 2579 scope.go:115] "RemoveContainer" containerID="180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:24:41.628230 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe\": container with ID starting with 180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe not found: ID does not exist" containerID="180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.628346 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe} err="failed to get container status \"180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe\": rpc error: code = NotFound desc = could not find container \"180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe\": container with ID starting with 180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe not found: ID does not exist" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.628370 2579 scope.go:115] "RemoveContainer" containerID="fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:24:41.630512 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa\": container with ID starting with fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa not found: ID does not 
exist" containerID="fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.630606 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa} err="failed to get container status \"fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa\": rpc error: code = NotFound desc = could not find container \"fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa\": container with ID starting with fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa not found: ID does not exist" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.630628 2579 scope.go:115] "RemoveContainer" containerID="5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.633240 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6} err="failed to get container status \"5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6\": rpc error: code = NotFound desc = could not find container \"5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6\": container with ID starting with 5aa458e98593fb0138f1586f221c49faf3e193d14178d1b9bad9ecd6f079c1b6 not found: ID does not exist" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.633315 2579 scope.go:115] "RemoveContainer" containerID="180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.634626 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe} err="failed to get container status \"180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe\": rpc error: code = NotFound desc = could not find container \"180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe\": container with ID starting with 180e2c10ea2886645a4dfde1732419123fed304db011ebf0e606c741b83af3fe not found: ID does not exist" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.634701 2579 scope.go:115] "RemoveContainer" containerID="fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa" Jan 16 21:24:41 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:41.636197 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa} err="failed to get container status \"fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa\": rpc error: code = NotFound desc = could not find container \"fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa\": container with ID starting with fa576909424de31254e1c4275c6fb0976b920ec554d33aefeb4a8e3f46464ffa not found: ID does not exist" Jan 16 21:24:42 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:24:42 api-int.lab.ocpipi.lan sudo[21134]: pam_unix(sudo:session): session closed for user root Jan 16 21:24:42 api-int.lab.ocpipi.lan systemd[1]: session-c28.scope: Deactivated successfully. 
Jan 16 21:24:43 api-int.lab.ocpipi.lan sudo[21200]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:43 api-int.lab.ocpipi.lan systemd[1]: session-c35.scope: Deactivated successfully.
Jan 16 21:24:43 api-int.lab.ocpipi.lan sudo[21130]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:43 api-int.lab.ocpipi.lan systemd[1]: session-c27.scope: Deactivated successfully.
Jan 16 21:24:43 api-int.lab.ocpipi.lan sudo[21126]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:43 api-int.lab.ocpipi.lan systemd[1]: session-c25.scope: Deactivated successfully.
Jan 16 21:24:43 api-int.lab.ocpipi.lan sudo[21122]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:43 api-int.lab.ocpipi.lan systemd[1]: session-c26.scope: Deactivated successfully.
Jan 16 21:24:43 api-int.lab.ocpipi.lan sudo[21164]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:43 api-int.lab.ocpipi.lan systemd[1]: session-c33.scope: Deactivated successfully.
Jan 16 21:24:43 api-int.lab.ocpipi.lan sudo[21253]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:43 api-int.lab.ocpipi.lan systemd[1]: session-c46.scope: Deactivated successfully.
Jan 16 21:24:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:43.849449 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 21:24:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:43.849988 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 21:24:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:43.850042 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 21:24:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:43.850133 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 21:24:43 api-int.lab.ocpipi.lan sudo[21179]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:43 api-int.lab.ocpipi.lan systemd[1]: session-c34.scope: Deactivated successfully.
Jan 16 21:24:43 api-int.lab.ocpipi.lan sudo[21147]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:43 api-int.lab.ocpipi.lan sudo[21170]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:43 api-int.lab.ocpipi.lan systemd[1]: session-c30.scope: Deactivated successfully.
Jan 16 21:24:43 api-int.lab.ocpipi.lan systemd[1]: session-c37.scope: Deactivated successfully.
Jan 16 21:24:44 api-int.lab.ocpipi.lan sudo[21230]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:44 api-int.lab.ocpipi.lan systemd[1]: session-c43.scope: Deactivated successfully.
Jan 16 21:24:44 api-int.lab.ocpipi.lan sudo[21235]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:44 api-int.lab.ocpipi.lan sudo[21185]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:44 api-int.lab.ocpipi.lan systemd[1]: session-c44.scope: Deactivated successfully.
Jan 16 21:24:44 api-int.lab.ocpipi.lan systemd[1]: session-c38.scope: Deactivated successfully.
Jan 16 21:24:44 api-int.lab.ocpipi.lan sudo[21206]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:44 api-int.lab.ocpipi.lan systemd[1]: session-c40.scope: Deactivated successfully.
Jan 16 21:24:44 api-int.lab.ocpipi.lan sudo[21140]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:44 api-int.lab.ocpipi.lan systemd[1]: session-c29.scope: Deactivated successfully.
Jan 16 21:24:44 api-int.lab.ocpipi.lan sudo[21225]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:44 api-int.lab.ocpipi.lan systemd[1]: session-c45.scope: Deactivated successfully.
Jan 16 21:24:44 api-int.lab.ocpipi.lan sudo[21261]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:44 api-int.lab.ocpipi.lan systemd[1]: session-c47.scope: Deactivated successfully.
Jan 16 21:24:44 api-int.lab.ocpipi.lan sudo[21210]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:44 api-int.lab.ocpipi.lan systemd[1]: session-c41.scope: Deactivated successfully.
Jan 16 21:24:44 api-int.lab.ocpipi.lan sudo[21219]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:44 api-int.lab.ocpipi.lan sudo[21152]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:44 api-int.lab.ocpipi.lan systemd[1]: session-c42.scope: Deactivated successfully.
Jan 16 21:24:44 api-int.lab.ocpipi.lan systemd[1]: session-c31.scope: Deactivated successfully.
Jan 16 21:24:44 api-int.lab.ocpipi.lan sudo[21214]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:44 api-int.lab.ocpipi.lan systemd[1]: session-c39.scope: Deactivated successfully.
Jan 16 21:24:44 api-int.lab.ocpipi.lan sudo[21175]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:44 api-int.lab.ocpipi.lan systemd[1]: session-c36.scope: Deactivated successfully.
Jan 16 21:24:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:44.637059605Z" level=info msg="Stopping pod sandbox: d33fcfd348cf2c3a24fa9aa431b71f77fb2351c6c87e2ab3eb4270f280959111" id=8f9bf0be-ea1f-457a-90c9-0d0d5741c5e0 name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:24:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:44.637236319Z" level=info msg="Stopped pod sandbox (already stopped): d33fcfd348cf2c3a24fa9aa431b71f77fb2351c6c87e2ab3eb4270f280959111" id=8f9bf0be-ea1f-457a-90c9-0d0d5741c5e0 name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:24:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:44.639875762Z" level=info msg="Removing pod sandbox: d33fcfd348cf2c3a24fa9aa431b71f77fb2351c6c87e2ab3eb4270f280959111" id=ca1ea6c8-1f2e-44c6-8161-a05c253642f1 name=/runtime.v1.RuntimeService/RemovePodSandbox
Jan 16 21:24:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:44.645308906Z" level=info msg="Removed pod sandbox: d33fcfd348cf2c3a24fa9aa431b71f77fb2351c6c87e2ab3eb4270f280959111" id=ca1ea6c8-1f2e-44c6-8161-a05c253642f1 name=/runtime.v1.RuntimeService/RemovePodSandbox
Jan 16 21:24:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:44.646462486Z" level=info msg="Stopping pod sandbox: 698793765b36d11ab57ecfa5b37f206a0c00f023a38088216f8e7b16931b26a4" id=87543b67-29d8-4a7a-85b0-777489e07fd7 name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:24:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:44.646546245Z" level=info msg="Stopped pod sandbox (already stopped): 698793765b36d11ab57ecfa5b37f206a0c00f023a38088216f8e7b16931b26a4" id=87543b67-29d8-4a7a-85b0-777489e07fd7 name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:24:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:44.647767712Z" level=info msg="Removing pod sandbox: 698793765b36d11ab57ecfa5b37f206a0c00f023a38088216f8e7b16931b26a4" id=3091a28a-5ea8-47f3-aa43-852ac30c35b0 name=/runtime.v1.RuntimeService/RemovePodSandbox
Jan 16 21:24:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:44.652120186Z" level=info msg="Removed pod sandbox: 698793765b36d11ab57ecfa5b37f206a0c00f023a38088216f8e7b16931b26a4" id=3091a28a-5ea8-47f3-aa43-852ac30c35b0 name=/runtime.v1.RuntimeService/RemovePodSandbox
Jan 16 21:24:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:44.653178783Z" level=info msg="Stopping pod sandbox: df7414d7cd6ee869f535f4228fa7b7b23f6ac8632001d1dce75dcc25250e3f1b" id=e731d8ff-3a83-42e5-8936-85345be3a25c name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:24:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:44.653243711Z" level=info msg="Stopped pod sandbox (already stopped): df7414d7cd6ee869f535f4228fa7b7b23f6ac8632001d1dce75dcc25250e3f1b" id=e731d8ff-3a83-42e5-8936-85345be3a25c name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:24:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:44.654314333Z" level=info msg="Removing pod sandbox: df7414d7cd6ee869f535f4228fa7b7b23f6ac8632001d1dce75dcc25250e3f1b" id=a09b2c21-f790-4d8e-b6e2-d84a000a42b1 name=/runtime.v1.RuntimeService/RemovePodSandbox
Jan 16 21:24:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:44.660683526Z" level=info msg="Removed pod sandbox: df7414d7cd6ee869f535f4228fa7b7b23f6ac8632001d1dce75dcc25250e3f1b" id=a09b2c21-f790-4d8e-b6e2-d84a000a42b1 name=/runtime.v1.RuntimeService/RemovePodSandbox
Jan 16 21:24:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:44.661536858Z" level=info msg="Stopping pod sandbox: f7a6f707f3bda3601ec15d7bd6975ac503d8a121077b16831d7ae849142883fd" id=22665171-8cb5-48b6-957b-0295f1da7a07 name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:24:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:44.661601504Z" level=info msg="Stopped pod sandbox (already stopped): f7a6f707f3bda3601ec15d7bd6975ac503d8a121077b16831d7ae849142883fd" id=22665171-8cb5-48b6-957b-0295f1da7a07 name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:24:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:44.662434161Z" level=info msg="Removing pod sandbox: f7a6f707f3bda3601ec15d7bd6975ac503d8a121077b16831d7ae849142883fd" id=75219553-e118-42cd-b9a5-c527e36aa75c name=/runtime.v1.RuntimeService/RemovePodSandbox
Jan 16 21:24:44 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:24:44.666180803Z" level=info msg="Removed pod sandbox: f7a6f707f3bda3601ec15d7bd6975ac503d8a121077b16831d7ae849142883fd" id=75219553-e118-42cd-b9a5-c527e36aa75c name=/runtime.v1.RuntimeService/RemovePodSandbox
Jan 16 21:24:44 api-int.lab.ocpipi.lan sudo[21159]: pam_unix(sudo:session): session closed for user root
Jan 16 21:24:44 api-int.lab.ocpipi.lan systemd[1]: session-c32.scope: Deactivated successfully.
Jan 16 21:24:46 api-int.lab.ocpipi.lan systemd[1]: bootkube.service: Scheduled restart job, restart counter is at 2.
Jan 16 21:24:46 api-int.lab.ocpipi.lan systemd[1]: Stopped Bootstrap a Kubernetes cluster.
Jan 16 21:24:46 api-int.lab.ocpipi.lan systemd[1]: bootkube.service: Consumed 29.682s CPU time.
Jan 16 21:24:46 api-int.lab.ocpipi.lan systemd[1]: Started Bootstrap a Kubernetes cluster.
Jan 16 21:24:48 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 1dbd72dd3521f998183975e4005d12ead6d097d9ba232140ab3beb2f2814a3e8.
Jan 16 21:24:48 api-int.lab.ocpipi.lan systemd[1]: libpod-1dbd72dd3521f998183975e4005d12ead6d097d9ba232140ab3beb2f2814a3e8.scope: Deactivated successfully.
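The sequence above shows systemd restarting bootkube.service after its exit status 1 at 21:24:41: the scheduled restart job increments the restart counter to 2, stops the failed unit, and starts a fresh run. To find out why the previous attempt failed, standard systemd tooling would be used on the host; these commands are illustrative and not part of this log:

  sudo systemctl status bootkube.service
  # Tail the unit's recent output to locate the failing bootstrap stage
  sudo journalctl -u bootkube.service --no-pager | tail -n 50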
Jan 16 21:24:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:48.807718 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:24:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:48.829492 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:24:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:48.830072 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:24:48 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:48.830146 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:24:48 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.9Pr5VO.mount: Deactivated successfully.
Jan 16 21:24:49 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1dbd72dd3521f998183975e4005d12ead6d097d9ba232140ab3beb2f2814a3e8-userdata-shm.mount: Deactivated successfully.
Jan 16 21:24:49 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-34f73a25172d9a8bd39d65705a7c03966d0e0f56e273ee4078facdcffb923bc3-merged.mount: Deactivated successfully.
Jan 16 21:24:49 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 7ae6285e48fbe8f4c50c575c68da6f12eb7502f430ecbc962b2d73978208093c.
Jan 16 21:24:50 api-int.lab.ocpipi.lan systemd[1]: libpod-7ae6285e48fbe8f4c50c575c68da6f12eb7502f430ecbc962b2d73978208093c.scope: Deactivated successfully.
Jan 16 21:24:50 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ae6285e48fbe8f4c50c575c68da6f12eb7502f430ecbc962b2d73978208093c-userdata-shm.mount: Deactivated successfully.
Jan 16 21:24:50 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-48fcf184b9a0e9b69e5d1b1b2f917e6c6fb979142ff5cc015bd6e2f91b2ebcf7-merged.mount: Deactivated successfully.
Jan 16 21:24:51 api-int.lab.ocpipi.lan approve-csr.sh[21695]: No resources found
Jan 16 21:24:51 api-int.lab.ocpipi.lan systemd[1]: run-runc-281d5a7f355b7e3c4a68bef2da6cbbea28865b4b20fc0fadd79ef10eff763fb3-runc.TAUH2n.mount: Deactivated successfully.
Jan 16 21:24:51 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 281d5a7f355b7e3c4a68bef2da6cbbea28865b4b20fc0fadd79ef10eff763fb3.
Jan 16 21:24:51 api-int.lab.ocpipi.lan systemd[1]: libpod-281d5a7f355b7e3c4a68bef2da6cbbea28865b4b20fc0fadd79ef10eff763fb3.scope: Deactivated successfully.
Jan 16 21:24:52 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-281d5a7f355b7e3c4a68bef2da6cbbea28865b4b20fc0fadd79ef10eff763fb3-userdata-shm.mount: Deactivated successfully.
Jan 16 21:24:52 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-46b9c5af4c64d5fc75d6ff3951ca00b16f97ab234310009d3c3824384c2bd030-merged.mount: Deactivated successfully.
Jan 16 21:24:52 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container d63e034ea73d8d701f527fd1b56a438c85463f590d53e42b6b203c72ab9a5ebc.
Jan 16 21:24:53 api-int.lab.ocpipi.lan systemd[1]: libpod-d63e034ea73d8d701f527fd1b56a438c85463f590d53e42b6b203c72ab9a5ebc.scope: Deactivated successfully.
Jan 16 21:24:53 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d63e034ea73d8d701f527fd1b56a438c85463f590d53e42b6b203c72ab9a5ebc-userdata-shm.mount: Deactivated successfully.
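approve-csr.sh reporting "No resources found" is the message oc prints when no certificate signing requests exist yet; the helper evidently polls for pending CSRs so that joining nodes get their kubelet certificates approved. A sketch of what one iteration of such a loop typically runs; the actual contents of approve-csr.sh are not shown in this log:

  # Approve every pending CSR, if any; prints "No resources found" when none exist
  oc get csr -o name | xargs --no-run-if-empty oc adm certificate approve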
Jan 16 21:24:53 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-b01a5e9e84a18dd84f37fea0697f1bf7fac06ecc6127518e431c1a39c1d24a06-merged.mount: Deactivated successfully.
Jan 16 21:24:54 api-int.lab.ocpipi.lan systemd[1]: run-runc-ace43d57940777d1365e225f5b518263bcd9e08c2d2e26da5a2b85df0ba29758-runc.Tl2Pxk.mount: Deactivated successfully.
Jan 16 21:24:54 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container ace43d57940777d1365e225f5b518263bcd9e08c2d2e26da5a2b85df0ba29758.
Jan 16 21:24:54 api-int.lab.ocpipi.lan systemd[1]: libpod-ace43d57940777d1365e225f5b518263bcd9e08c2d2e26da5a2b85df0ba29758.scope: Deactivated successfully.
Jan 16 21:24:54 api-int.lab.ocpipi.lan systemd[1]: Stopping User Manager for UID 0...
Jan 16 21:24:54 api-int.lab.ocpipi.lan systemd[21157]: Activating special unit Exit the Session...
Jan 16 21:24:54 api-int.lab.ocpipi.lan systemd[21157]: Stopped target Main User Target.
Jan 16 21:24:54 api-int.lab.ocpipi.lan systemd[21157]: Stopped target Basic System.
Jan 16 21:24:54 api-int.lab.ocpipi.lan systemd[21157]: Stopped target Paths.
Jan 16 21:24:54 api-int.lab.ocpipi.lan systemd[21157]: Stopped target Sockets.
Jan 16 21:24:54 api-int.lab.ocpipi.lan systemd[21157]: Stopped target Timers.
Jan 16 21:24:54 api-int.lab.ocpipi.lan systemd[21157]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 16 21:24:54 api-int.lab.ocpipi.lan systemd[21157]: Closed D-Bus User Message Bus Socket.
Jan 16 21:24:54 api-int.lab.ocpipi.lan systemd[21157]: Stopped Create User's Volatile Files and Directories.
Jan 16 21:24:54 api-int.lab.ocpipi.lan systemd[21157]: Removed slice User Application Slice.
Jan 16 21:24:54 api-int.lab.ocpipi.lan systemd[21157]: Reached target Shutdown.
Jan 16 21:24:54 api-int.lab.ocpipi.lan systemd[21157]: Finished Exit the Session.
Jan 16 21:24:54 api-int.lab.ocpipi.lan systemd[21157]: Reached target Exit the Session.
Jan 16 21:24:55 api-int.lab.ocpipi.lan systemd[1]: user@0.service: Deactivated successfully.
Jan 16 21:24:55 api-int.lab.ocpipi.lan systemd[1]: Stopped User Manager for UID 0.
Jan 16 21:24:55 api-int.lab.ocpipi.lan systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 16 21:24:55 api-int.lab.ocpipi.lan systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 16 21:24:55 api-int.lab.ocpipi.lan systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 16 21:24:55 api-int.lab.ocpipi.lan systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 16 21:24:55 api-int.lab.ocpipi.lan systemd[1]: Removed slice User Slice of UID 0.
Jan 16 21:24:55 api-int.lab.ocpipi.lan systemd[1]: user-0.slice: Consumed 11.340s CPU time.
Jan 16 21:24:55 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ace43d57940777d1365e225f5b518263bcd9e08c2d2e26da5a2b85df0ba29758-userdata-shm.mount: Deactivated successfully.
Jan 16 21:24:55 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-6f4615d50a1a02aac9d05e9edbd30b403d6bf6ef81fca0b581252c37191408e2-merged.mount: Deactivated successfully.
Jan 16 21:24:55 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 1128b9ae0a5c1d4ab695e7a7c8d9df66a1411e257516f5a5945bcb45d65c1007.
Jan 16 21:24:56 api-int.lab.ocpipi.lan systemd[1]: libpod-1128b9ae0a5c1d4ab695e7a7c8d9df66a1411e257516f5a5945bcb45d65c1007.scope: Deactivated successfully.
Jan 16 21:24:56 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1128b9ae0a5c1d4ab695e7a7c8d9df66a1411e257516f5a5945bcb45d65c1007-userdata-shm.mount: Deactivated successfully.
Jan 16 21:24:56 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-9c7dd516a387d8f8646014f21a3d9d94ca2a5567d4b52b6040149db00cb7e8a6-merged.mount: Deactivated successfully.
Jan 16 21:24:57 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 208d72466cd35cac74c824db902bcfbfc879575099341c47f030cec89686dcb3.
Jan 16 21:24:57 api-int.lab.ocpipi.lan systemd[1]: libpod-208d72466cd35cac74c824db902bcfbfc879575099341c47f030cec89686dcb3.scope: Deactivated successfully.
Jan 16 21:24:58 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-208d72466cd35cac74c824db902bcfbfc879575099341c47f030cec89686dcb3-userdata-shm.mount: Deactivated successfully.
Jan 16 21:24:58 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-1d853df75474a9fc698263a80d93b3d1387d26e40476e17879d8d61f57c78a41-merged.mount: Deactivated successfully.
Jan 16 21:24:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:58.881245 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:24:58 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 75c2423377aa79024284d8cc1e807f71e4fa1d4a0cb0c66985a78a97e6a30d2c.
Jan 16 21:24:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:58.894753 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:24:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:58.895556 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:24:58 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:24:58.896385 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:24:59 api-int.lab.ocpipi.lan systemd[1]: libpod-75c2423377aa79024284d8cc1e807f71e4fa1d4a0cb0c66985a78a97e6a30d2c.scope: Deactivated successfully.
Jan 16 21:24:59 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-75c2423377aa79024284d8cc1e807f71e4fa1d4a0cb0c66985a78a97e6a30d2c-userdata-shm.mount: Deactivated successfully.
Jan 16 21:24:59 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-4402769e69d3fe69a6b3ba85361c433adfe59ce36d0c382cfc73c3714f20818d-merged.mount: Deactivated successfully.
Jan 16 21:25:00 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container bc1bf4fd220198bd5d563f2dd10f35962462b6bb5a236c4c80946c8f7caf2928.
Jan 16 21:25:00 api-int.lab.ocpipi.lan systemd[1]: libpod-bc1bf4fd220198bd5d563f2dd10f35962462b6bb5a236c4c80946c8f7caf2928.scope: Deactivated successfully.
Jan 16 21:25:01 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bc1bf4fd220198bd5d563f2dd10f35962462b6bb5a236c4c80946c8f7caf2928-userdata-shm.mount: Deactivated successfully.
Jan 16 21:25:01 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-7a7fcd330ed1290910ae4c61907871a849e61f1acfcaaa217fde6dd9dc74282f-merged.mount: Deactivated successfully.
Jan 16 21:25:01 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 2fdbc92f6e795166b917955d91b90da7b27dfb5f649c47c0fb1dbae14e10304f.
Jan 16 21:25:02 api-int.lab.ocpipi.lan systemd[1]: libpod-2fdbc92f6e795166b917955d91b90da7b27dfb5f649c47c0fb1dbae14e10304f.scope: Deactivated successfully.
Jan 16 21:25:02 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:25:02 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2fdbc92f6e795166b917955d91b90da7b27dfb5f649c47c0fb1dbae14e10304f-userdata-shm.mount: Deactivated successfully.
Jan 16 21:25:02 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-9b84df2cce48b2800bf55c5dc0d4a70775ca86dcebcec92da43d0ac7b79ff4e6-merged.mount: Deactivated successfully.
Jan 16 21:25:03 api-int.lab.ocpipi.lan systemd[1]: run-runc-c4fcbba10ce7a72f2fdde00b4966d77007a6b5852c36c5340a0271bf88af1218-runc.h3rdqX.mount: Deactivated successfully.
Jan 16 21:25:03 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container c4fcbba10ce7a72f2fdde00b4966d77007a6b5852c36c5340a0271bf88af1218.
Jan 16 21:25:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:03.467735 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:03.475787 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:03.476248 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:03 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:03.476312 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:03 api-int.lab.ocpipi.lan systemd[1]: libpod-c4fcbba10ce7a72f2fdde00b4966d77007a6b5852c36c5340a0271bf88af1218.scope: Deactivated successfully.
Jan 16 21:25:04 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4fcbba10ce7a72f2fdde00b4966d77007a6b5852c36c5340a0271bf88af1218-userdata-shm.mount: Deactivated successfully.
Jan 16 21:25:04 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-20f012894db119bc780826cdf02994cfe52401d8e97257008f4e795f7c610b89-merged.mount: Deactivated successfully.
Jan 16 21:25:04 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 696682a1209a9dfce5a29cc2a65db4eb8c263cd37d3ffe74a0b6ad4822cb02ec.
Jan 16 21:25:05 api-int.lab.ocpipi.lan systemd[1]: libpod-696682a1209a9dfce5a29cc2a65db4eb8c263cd37d3ffe74a0b6ad4822cb02ec.scope: Deactivated successfully.
Jan 16 21:25:05 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-696682a1209a9dfce5a29cc2a65db4eb8c263cd37d3ffe74a0b6ad4822cb02ec-userdata-shm.mount: Deactivated successfully.
Jan 16 21:25:05 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-5e61a7e0dbc5181afe81d402a8bc3972dc2b90a8c5bb737b3710eff95ff108b6-merged.mount: Deactivated successfully.
Jan 16 21:25:06 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 58d1c1643c109748454226f5cfffb1a9a084e1c35f1d635764f71e3237da76d0.
Jan 16 21:25:06 api-int.lab.ocpipi.lan systemd[1]: libpod-58d1c1643c109748454226f5cfffb1a9a084e1c35f1d635764f71e3237da76d0.scope: Deactivated successfully.
Jan 16 21:25:06 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-58d1c1643c109748454226f5cfffb1a9a084e1c35f1d635764f71e3237da76d0-userdata-shm.mount: Deactivated successfully.
Jan 16 21:25:07 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-a656d035e01a838f66de43e4058b332047ca5d362c709add25053e33aa5fafb8-merged.mount: Deactivated successfully.
Jan 16 21:25:07 api-int.lab.ocpipi.lan systemd[1]: run-runc-11c19ab44d62538f49a784a4af68c0d1c78a3ebb133a437f5b38022320a1ec39-runc.wV0XYK.mount: Deactivated successfully.
Jan 16 21:25:07 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 11c19ab44d62538f49a784a4af68c0d1c78a3ebb133a437f5b38022320a1ec39.
Jan 16 21:25:08 api-int.lab.ocpipi.lan systemd[1]: libpod-11c19ab44d62538f49a784a4af68c0d1c78a3ebb133a437f5b38022320a1ec39.scope: Deactivated successfully.
Jan 16 21:25:08 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-11c19ab44d62538f49a784a4af68c0d1c78a3ebb133a437f5b38022320a1ec39-userdata-shm.mount: Deactivated successfully.
Jan 16 21:25:08 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-29df6cc9439f206170f2b905ff0db1128da32d5cb4e28cca5f0a087bf850ee53-merged.mount: Deactivated successfully.
Jan 16 21:25:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:08.943431 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:08.958596 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:08.959576 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:08 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:08.959756 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:09 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 886174852b5a760263369563e6b02a8301eddf47957d369b19a06fe823f3d542.
Jan 16 21:25:09 api-int.lab.ocpipi.lan systemd[1]: libpod-886174852b5a760263369563e6b02a8301eddf47957d369b19a06fe823f3d542.scope: Deactivated successfully.
Jan 16 21:25:10 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-886174852b5a760263369563e6b02a8301eddf47957d369b19a06fe823f3d542-userdata-shm.mount: Deactivated successfully.
Jan 16 21:25:10 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-490b6b6e0fcb12918bebebe35458297be28cb44e3f4e36a994157fc15e45874b-merged.mount: Deactivated successfully.
Jan 16 21:25:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:10.468396 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:10.479537 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:10.479725 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:10.479777 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:12 api-int.lab.ocpipi.lan approve-csr.sh[22713]: No resources found
Jan 16 21:25:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:14.468525 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:14.482590 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:14.488256 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:14 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:14.488459 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:14 api-int.lab.ocpipi.lan bootkube.sh[21513]: Check if API and API-Int URLs are resolvable during bootstrap
Jan 16 21:25:14 api-int.lab.ocpipi.lan bootkube.sh[21513]: Checking if api.lab.ocpipi.lan of type API_URL is resolvable
Jan 16 21:25:14 api-int.lab.ocpipi.lan bootkube.sh[21513]: Starting stage resolve-api-url
Jan 16 21:25:15 api-int.lab.ocpipi.lan bootkube.sh[21513]: Successfully resolved API_URL api.lab.ocpipi.lan
Jan 16 21:25:15 api-int.lab.ocpipi.lan bootkube.sh[21513]: Checking if api-int.lab.ocpipi.lan of type API_INT_URL is resolvable
Jan 16 21:25:15 api-int.lab.ocpipi.lan bootkube.sh[21513]: Starting stage resolve-api-int-url
Jan 16 21:25:15 api-int.lab.ocpipi.lan bootkube.sh[21513]: Successfully resolved API_INT_URL api-int.lab.ocpipi.lan
Jan 16 21:25:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:16.467441 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:16.474128 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:16.474536 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:16 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:16.474705 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:16 api-int.lab.ocpipi.lan systemd[1]: run-runc-01b9a1dbae7f64d68ab4adb3f108667058bbde467fbf315f2b565cd0da4bf76e-runc.jWDarm.mount: Deactivated successfully.
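The resolve-api-url and resolve-api-int-url stages above gate the restarted bootkube run on both API endpoints resolving before it proceeds. An equivalent manual check, assuming the same resolver configuration the node uses:

  # Both names must resolve for bootstrap to continue past these stages
  getent hosts api.lab.ocpipi.lan
  getent hosts api-int.lab.ocpipi.lan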
Jan 16 21:25:16 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 01b9a1dbae7f64d68ab4adb3f108667058bbde467fbf315f2b565cd0da4bf76e.
Jan 16 21:25:17 api-int.lab.ocpipi.lan bootkube.sh[22774]: https://localhost:2379 is healthy: successfully committed proposal: took = 108.499972ms
Jan 16 21:25:17 api-int.lab.ocpipi.lan systemd[1]: libpod-01b9a1dbae7f64d68ab4adb3f108667058bbde467fbf315f2b565cd0da4bf76e.scope: Deactivated successfully.
Jan 16 21:25:18 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01b9a1dbae7f64d68ab4adb3f108667058bbde467fbf315f2b565cd0da4bf76e-userdata-shm.mount: Deactivated successfully.
Jan 16 21:25:18 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-f767f510f9fd7f8f7793a7b9896dabc4116a247c6233799a5756056b10a43770-merged.mount: Deactivated successfully.
Jan 16 21:25:18 api-int.lab.ocpipi.lan bootkube.sh[21513]: Starting cluster-bootstrap...
Jan 16 21:25:18 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.EEYVVo.mount: Deactivated successfully.
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.076143 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:19 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 566fda088ef07a7e2a11096e3a7fd493718b565d32df724e8231a1ae88c2b582.
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.096421 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.096565 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.096621 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:19 api-int.lab.ocpipi.lan bootkube.sh[22872]: Starting temporary bootstrap control plane...
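The "https://localhost:2379 is healthy" line matches the output format of an etcd client health probe against the bootstrap etcd member, which bootkube runs before handing off to cluster-bootstrap. A comparable manual probe, assuming etcdctl and the member's client TLS material are available on the host (certificate paths vary and are not shown in this log):

  # --cacert/--cert/--key must additionally point at the etcd client certs
  etcdctl --endpoints=https://localhost:2379 endpoint health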
Jan 16 21:25:19 api-int.lab.ocpipi.lan bootkube.sh[22872]: Waiting up to 20m0s for the Kubernetes API
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.682489 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain]
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.685644 2579 topology_manager.go:212] "Topology Admit Handler" podUID=05c96ce8daffad47cf2b15e2a67753ec podNamespace="openshift-cluster-version" podName="bootstrap-cluster-version-operator-localhost.localdomain"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:25:19.686394 2579 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b8b0f2012ce2b145220be181d7a5aa55" containerName="kube-scheduler"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.686573 2579 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b0f2012ce2b145220be181d7a5aa55" containerName="kube-scheduler"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:25:19.686637 2579 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1cb3be1f2df5273e9b77f7050777bcbe" containerName="kube-apiserver"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.686671 2579 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb3be1f2df5273e9b77f7050777bcbe" containerName="kube-apiserver"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:25:19.686703 2579 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1cb3be1f2df5273e9b77f7050777bcbe" containerName="kube-apiserver-insecure-readyz"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.686742 2579 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb3be1f2df5273e9b77f7050777bcbe" containerName="kube-apiserver-insecure-readyz"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:25:19.686773 2579 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c3db590e56a311b869092b2d6b1724e5" containerName="kube-controller-manager"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.686908 2579 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3db590e56a311b869092b2d6b1724e5" containerName="kube-controller-manager"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:25:19.687151 2579 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b8b0f2012ce2b145220be181d7a5aa55" containerName="kube-scheduler"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.687183 2579 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b0f2012ce2b145220be181d7a5aa55" containerName="kube-scheduler"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:25:19.687215 2579 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a6238b9f1f3a2f2bd2b4b1b0c7962bdd" containerName="cloud-credential-operator"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.687243 2579 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6238b9f1f3a2f2bd2b4b1b0c7962bdd" containerName="cloud-credential-operator"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:25:19.687273 2579 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c3db590e56a311b869092b2d6b1724e5" containerName="cluster-policy-controller"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.687300 2579 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3db590e56a311b869092b2d6b1724e5" containerName="cluster-policy-controller"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:25:19.687338 2579 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1cb3be1f2df5273e9b77f7050777bcbe" containerName="setup"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.687367 2579 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb3be1f2df5273e9b77f7050777bcbe" containerName="setup"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.687560 2579 memory_manager.go:346] "RemoveStaleState removing state" podUID="1cb3be1f2df5273e9b77f7050777bcbe" containerName="kube-apiserver"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.687603 2579 memory_manager.go:346] "RemoveStaleState removing state" podUID="b8b0f2012ce2b145220be181d7a5aa55" containerName="kube-scheduler"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.687633 2579 memory_manager.go:346] "RemoveStaleState removing state" podUID="c3db590e56a311b869092b2d6b1724e5" containerName="cluster-policy-controller"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.687662 2579 memory_manager.go:346] "RemoveStaleState removing state" podUID="c3db590e56a311b869092b2d6b1724e5" containerName="kube-controller-manager"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.687692 2579 memory_manager.go:346] "RemoveStaleState removing state" podUID="c3db590e56a311b869092b2d6b1724e5" containerName="kube-controller-manager"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.687728 2579 memory_manager.go:346] "RemoveStaleState removing state" podUID="1cb3be1f2df5273e9b77f7050777bcbe" containerName="kube-apiserver-insecure-readyz"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.687776 2579 memory_manager.go:346] "RemoveStaleState removing state" podUID="a6238b9f1f3a2f2bd2b4b1b0c7962bdd" containerName="cloud-credential-operator"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.688126 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.710554 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.710796 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.711132 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.713379 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain]
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.713555 2579 topology_manager.go:212] "Topology Admit Handler" podUID=a6238b9f1f3a2f2bd2b4b1b0c7962bdd podNamespace="openshift-cloud-credential-operator" podName="cloud-credential-operator-localhost.localdomain"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:25:19.714108 2579 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c3db590e56a311b869092b2d6b1724e5" containerName="kube-controller-manager"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.714166 2579 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3db590e56a311b869092b2d6b1724e5" containerName="kube-controller-manager"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.715265 2579 memory_manager.go:346] "RemoveStaleState removing state" podUID="b8b0f2012ce2b145220be181d7a5aa55" containerName="kube-scheduler"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.715444 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.726591 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.726781 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.727100 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.728091 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain]
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.736298 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[kube-system/bootstrap-kube-controller-manager-localhost.localdomain]
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.736410 2579 topology_manager.go:212] "Topology Admit Handler" podUID=c3db590e56a311b869092b2d6b1724e5 podNamespace="kube-system" podName="bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.736628 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.743288 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.743477 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.743532 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.745678 2579 kubelet.go:2425] "SyncLoop ADD" source="file" pods=[kube-system/bootstrap-kube-scheduler-localhost.localdomain]
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.746114 2579 topology_manager.go:212] "Topology Admit Handler" podUID=b8b0f2012ce2b145220be181d7a5aa55 podNamespace="kube-system" podName="bootstrap-kube-scheduler-localhost.localdomain"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.746294 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.755250 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain"
event="NodeHasSufficientMemory" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.755440 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.755494 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:25:19 api-int.lab.ocpipi.lan systemd[1]: Created slice libcontainer container kubepods-besteffort-pod05c96ce8daffad47cf2b15e2a67753ec.slice. Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.821669 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.827646 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.828512 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.829406 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-etc-ssl-certs\") pod \"bootstrap-cluster-version-operator-localhost.localdomain\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.830118 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-secrets\") pod \"bootstrap-kube-scheduler-localhost.localdomain\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.830247 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.830351 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-logs\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.830441 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-logs\") pod \"bootstrap-kube-scheduler-localhost.localdomain\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 
21:25:19.830564 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.830673 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-secrets\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.830773 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a6238b9f1f3a2f2bd2b4b1b0c7962bdd-secrets\") pod \"cloud-credential-operator-localhost.localdomain\" (UID: \"a6238b9f1f3a2f2bd2b4b1b0c7962bdd\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.831150 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-config\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.831255 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-kubeconfig\") pod \"bootstrap-cluster-version-operator-localhost.localdomain\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.831441 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:25:19 api-int.lab.ocpipi.lan systemd[1]: Created slice libcontainer container kubepods-besteffort-poda6238b9f1f3a2f2bd2b4b1b0c7962bdd.slice. 
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.902528 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.912702 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.913091 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.913163 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.932227 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-secrets\") pod \"bootstrap-kube-scheduler-localhost.localdomain\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.932375 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.932468 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-logs\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.932584 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-logs\") pod \"bootstrap-kube-scheduler-localhost.localdomain\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.932689 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.932727 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.932786 2579 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-secrets\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.933201 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.933228 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a6238b9f1f3a2f2bd2b4b1b0c7962bdd-secrets\") pod \"cloud-credential-operator-localhost.localdomain\" (UID: \"a6238b9f1f3a2f2bd2b4b1b0c7962bdd\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.933105 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-logs\") pod \"bootstrap-kube-scheduler-localhost.localdomain\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.933323 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-secrets\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.933355 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a6238b9f1f3a2f2bd2b4b1b0c7962bdd-secrets\") pod \"cloud-credential-operator-localhost.localdomain\" (UID: \"a6238b9f1f3a2f2bd2b4b1b0c7962bdd\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.932740 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-logs\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.933660 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-config\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.933746 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: 
\"kubernetes.io/host-path/b8b0f2012ce2b145220be181d7a5aa55-secrets\") pod \"bootstrap-kube-scheduler-localhost.localdomain\" (UID: \"b8b0f2012ce2b145220be181d7a5aa55\") " pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.934098 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-kubeconfig\") pod \"bootstrap-cluster-version-operator-localhost.localdomain\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.934171 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c3db590e56a311b869092b2d6b1724e5-config\") pod \"bootstrap-kube-controller-manager-localhost.localdomain\" (UID: \"c3db590e56a311b869092b2d6b1724e5\") " pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.933759 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-kubeconfig\") pod \"bootstrap-cluster-version-operator-localhost.localdomain\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.934615 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-etc-ssl-certs\") pod \"bootstrap-cluster-version-operator-localhost.localdomain\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.934765 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/05c96ce8daffad47cf2b15e2a67753ec-etc-ssl-certs\") pod \"bootstrap-cluster-version-operator-localhost.localdomain\" (UID: \"05c96ce8daffad47cf2b15e2a67753ec\") " pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan systemd[1]: Created slice libcontainer container kubepods-burstable-podc3db590e56a311b869092b2d6b1724e5.slice. 
Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.964524 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.970174 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.970372 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.970431 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:25:19 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:19.973185 2579 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:25:19 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:19.977758320Z" level=info msg="Running pod sandbox: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/POD" id=fad910c3-a053-4885-83df-760196cc12d4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 16 21:25:19 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:19.979757207Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 21:25:19 api-int.lab.ocpipi.lan systemd[1]: Created slice libcontainer container kubepods-burstable-podb8b0f2012ce2b145220be181d7a5aa55.slice. Jan 16 21:25:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:20.011501 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:25:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:20.016271 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:25:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:20.017332 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:25:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:20.017503 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:25:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:20.018389 2579 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.034576380Z" level=info msg="Running pod sandbox: kube-system/bootstrap-kube-scheduler-localhost.localdomain/POD" id=f55eb075-46a7-47ec-8d44-2c8d952d1ece name=/runtime.v1.RuntimeService/RunPodSandbox Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.035094645Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.104193545Z" level=info msg="Ran pod sandbox 84a9a9fdb935fdf56b1aa2684295dfb71d8adb68ba31b934ad4cad7e6c1a23d6 with infra container: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/POD" id=fad910c3-a053-4885-83df-760196cc12d4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.128133682Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=3aeb0f54-1f74-410e-b897-2a57fea48990 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.131271444Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=3aeb0f54-1f74-410e-b897-2a57fea48990 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:20.139780 2579 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.143375210Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=3507608f-6c40-4fda-aae4-0cf3968e830b name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.144383966Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=3507608f-6c40-4fda-aae4-0cf3968e830b name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: W0116 21:25:20.144102 2579 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8b0f2012ce2b145220be181d7a5aa55.slice/crio-8b1cd37808edbb707d3022fa2253d889b3f4d83b84195201205430bd08259063 WatchSource:0}: Error finding container 8b1cd37808edbb707d3022fa2253d889b3f4d83b84195201205430bd08259063: Status 404 returned error can't find the container with id 8b1cd37808edbb707d3022fa2253d889b3f4d83b84195201205430bd08259063 Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.145674361Z" level=info msg="Running pod sandbox: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/POD" id=f6956be8-5618-4266-8752-e4f317d1943d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.146228271Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.194790680Z" level=info msg="Creating container: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager" id=ae0f157f-bb69-482a-a3ed-985a1a9faa3d name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.202296601Z" level=info msg="Ran pod sandbox 8b1cd37808edbb707d3022fa2253d889b3f4d83b84195201205430bd08259063 with infra container: kube-system/bootstrap-kube-scheduler-localhost.localdomain/POD" id=f55eb075-46a7-47ec-8d44-2c8d952d1ece name=/runtime.v1.RuntimeService/RunPodSandbox Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.206706394Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 21:25:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:20.221319 2579 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.225608820Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=577344b0-a351-498f-b190-b773095ac3a1 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.226515833Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=577344b0-a351-498f-b190-b773095ac3a1 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.228568012Z" level=info msg="Running pod sandbox: openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/POD" id=92864f0b-8ff3-4448-a17e-b317bb1afea6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.228786086Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=5653ff3d-da9f-45c7-a93f-303d8bdd56f8 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.228902437Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.243912418Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=5653ff3d-da9f-45c7-a93f-303d8bdd56f8 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.251434892Z" level=info msg="Creating container: kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler" id=45ebd8ca-b2ff-4c50-a59f-44106661a056 name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.251736164Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 21:25:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:20.265452 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" event=&{ID:c3db590e56a311b869092b2d6b1724e5 Type:ContainerStarted Data:84a9a9fdb935fdf56b1aa2684295dfb71d8adb68ba31b934ad4cad7e6c1a23d6} Jan 16 21:25:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:20.273086 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" event=&{ID:b8b0f2012ce2b145220be181d7a5aa55 Type:ContainerStarted 
Data:8b1cd37808edbb707d3022fa2253d889b3f4d83b84195201205430bd08259063} Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.279399524Z" level=info msg="Ran pod sandbox 3bdeb1297f2bd1ed9ad5b9b094efc69e4b41178913a37eb79f0ca9c388fc9c11 with infra container: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/POD" id=f6956be8-5618-4266-8752-e4f317d1943d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.294692283Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=32507246-7dd9-40bf-97ba-306ff6cf6107 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.298162294Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:40e15091a793905eb63a02d951105fc5c5904bfb294f8004c052ac950c9ac44a,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa],Size_:522846560,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=32507246-7dd9-40bf-97ba-306ff6cf6107 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:20.300530 2579 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 16 21:25:20 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:20.301493 2579 provider.go:82] Docker config file not found: couldn't find valid .dockercfg after checking in [/var/lib/kubelet /] Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.303285683Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=f91c9964-785a-4138-93ee-8081e122606d name=/runtime.v1.ImageService/PullImage Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.314487011Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa\"" Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.344230571Z" level=info msg="Ran pod sandbox 5f6efe5d237d4bd39536b1922909fcb4def1e85783bb9f2ea13f860ba2f28bfa with infra container: openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/POD" id=92864f0b-8ff3-4448-a17e-b317bb1afea6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.358662276Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1cccbc92c83dd170dea8cb72a09e96facba21f3fdf5e3dd3f3009796c481cd67" id=19747a4b-ee74-4f8c-a3b1-7be6be401e3a name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.359734264Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:90bdc1613647030f9fe768ad330e8ff0dca1cc04bf002dc32974238943125b9c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1cccbc92c83dd170dea8cb72a09e96facba21f3fdf5e3dd3f3009796c481cd67],Size_:704416475,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=19747a4b-ee74-4f8c-a3b1-7be6be401e3a 
name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.368549372Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1cccbc92c83dd170dea8cb72a09e96facba21f3fdf5e3dd3f3009796c481cd67" id=df70584e-aa1c-4ea7-a3ba-988001f7f52d name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.370111130Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:90bdc1613647030f9fe768ad330e8ff0dca1cc04bf002dc32974238943125b9c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1cccbc92c83dd170dea8cb72a09e96facba21f3fdf5e3dd3f3009796c481cd67],Size_:704416475,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=df70584e-aa1c-4ea7-a3ba-988001f7f52d name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.374415332Z" level=info msg="Creating container: openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/cloud-credential-operator" id=6c7274f4-84bd-4e57-a333-044d8d96bc82 name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:25:20 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:20.376328207Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 21:25:20 api-int.lab.ocpipi.lan bootkube.sh[22872]: API is up Jan 16 21:25:21 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_00_cluster-version-operator_00_namespace.yaml" namespaces.v1./openshift-cluster-version -n as it already exists Jan 16 21:25:21 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_00_cluster-version-operator_01_adminack_configmap.yaml" configmaps.v1./admin-acks -n openshift-config as it already exists Jan 16 21:25:21 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_00_cluster-version-operator_01_admingate_configmap.yaml" configmaps.v1./admin-gates -n openshift-config-managed as it already exists Jan 16 21:25:21 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_00_cluster-version-operator_01_clusteroperator.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/clusteroperators.config.openshift.io -n as it already exists Jan 16 21:25:21 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_00_cluster-version-operator_01_clusterversion.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/clusterversions.config.openshift.io -n as it already exists Jan 16 21:25:21 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_00_cluster-version-operator_02_roles.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/cluster-version-operator -n as it already exists Jan 16 21:25:21 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_00_cluster-version-operator_03_deployment.yaml" deployments.v1.apps/cluster-version-operator -n openshift-cluster-version as it already exists Jan 16 21:25:21 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-ee5caea6024f9ae3c4f59f9f40d4339c62ef96606b5351eed3dfffb489236f21.scope. 
Jan 16 21:25:21 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_00_namespace-openshift-infra.yaml" namespaces.v1./openshift-infra -n as it already exists Jan 16 21:25:21 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/rolebindingrestrictions.authorization.openshift.io -n as it already exists Jan 16 21:25:21 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_03_config-operator_01_proxy.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/proxies.config.openshift.io -n as it already exists Jan 16 21:25:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:21.286451 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" event=&{ID:a6238b9f1f3a2f2bd2b4b1b0c7962bdd Type:ContainerStarted Data:5f6efe5d237d4bd39536b1922909fcb4def1e85783bb9f2ea13f860ba2f28bfa} Jan 16 21:25:21 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:21.298238 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" event=&{ID:05c96ce8daffad47cf2b15e2a67753ec Type:ContainerStarted Data:3bdeb1297f2bd1ed9ad5b9b094efc69e4b41178913a37eb79f0ca9c388fc9c11} Jan 16 21:25:21 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container ee5caea6024f9ae3c4f59f9f40d4339c62ef96606b5351eed3dfffb489236f21. Jan 16 21:25:21 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_03_quota-openshift_01_clusterresourcequota.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/clusterresourcequotas.quota.openshift.io -n as it already exists Jan 16 21:25:21 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-f0081ab2100c5e9a7477538e5d8f42f7fe2c8977201da22b8ab342be078a3d1e.scope. Jan 16 21:25:21 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container f0081ab2100c5e9a7477538e5d8f42f7fe2c8977201da22b8ab342be078a3d1e. Jan 16 21:25:21 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:21.543433740Z" level=info msg="Created container ee5caea6024f9ae3c4f59f9f40d4339c62ef96606b5351eed3dfffb489236f21: kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler" id=45ebd8ca-b2ff-4c50-a59f-44106661a056 name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:25:21 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:21.549591293Z" level=info msg="Starting container: ee5caea6024f9ae3c4f59f9f40d4339c62ef96606b5351eed3dfffb489236f21" id=9076d5e7-683c-453d-a342-14e2b111b447 name=/runtime.v1.RuntimeService/StartContainer Jan 16 21:25:21 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_03_security-openshift_01_scc.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/securitycontextconstraints.security.openshift.io -n as it already exists Jan 16 21:25:21 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-a3f40f99ca7355409bf423aaf103bd4225adf3da5fb0e7fce21d850758e130bc.scope. 
Jan 16 21:25:21 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:21.608349249Z" level=info msg="Started container" PID=22954 containerID=ee5caea6024f9ae3c4f59f9f40d4339c62ef96606b5351eed3dfffb489236f21 description=kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler id=9076d5e7-683c-453d-a342-14e2b111b447 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8b1cd37808edbb707d3022fa2253d889b3f4d83b84195201205430bd08259063 Jan 16 21:25:21 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:21.632121492Z" level=info msg="Created container f0081ab2100c5e9a7477538e5d8f42f7fe2c8977201da22b8ab342be078a3d1e: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager" id=ae0f157f-bb69-482a-a3ed-985a1a9faa3d name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:25:21 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:21.634346974Z" level=info msg="Starting container: f0081ab2100c5e9a7477538e5d8f42f7fe2c8977201da22b8ab342be078a3d1e" id=e5dc7e31-26ee-4b45-a787-2b9d0f017c66 name=/runtime.v1.RuntimeService/StartContainer Jan 16 21:25:21 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container a3f40f99ca7355409bf423aaf103bd4225adf3da5fb0e7fce21d850758e130bc. Jan 16 21:25:21 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:21.708290027Z" level=info msg="Started container" PID=22978 containerID=f0081ab2100c5e9a7477538e5d8f42f7fe2c8977201da22b8ab342be078a3d1e description=kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager id=e5dc7e31-26ee-4b45-a787-2b9d0f017c66 name=/runtime.v1.RuntimeService/StartContainer sandboxID=84a9a9fdb935fdf56b1aa2684295dfb71d8adb68ba31b934ad4cad7e6c1a23d6 Jan 16 21:25:21 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_03_securityinternal-openshift_02_rangeallocation.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/rangeallocations.security.internal.openshift.io -n as it already exists Jan 16 21:25:21 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:21.793398275Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:384b0d1665ce12136ede1c708c4542d12eac1f788528f8bc77cb52d871057437" id=ddfde6de-7955-4f4f-a715-d15f0d341a71 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:21 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:21.794054314Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c6ce09d75120c7c75b95c587ffc4a7a3f18cc099961eab2583e449102365e5b0,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:384b0d1665ce12136ede1c708c4542d12eac1f788528f8bc77cb52d871057437],Size_:535546139,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=ddfde6de-7955-4f4f-a715-d15f0d341a71 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:21 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:21.795444479Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:384b0d1665ce12136ede1c708c4542d12eac1f788528f8bc77cb52d871057437" id=baf4b6b1-1066-4730-b0b6-261f01ed97f3 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:21 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:21.795890664Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:c6ce09d75120c7c75b95c587ffc4a7a3f18cc099961eab2583e449102365e5b0,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:384b0d1665ce12136ede1c708c4542d12eac1f788528f8bc77cb52d871057437],Size_:535546139,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=baf4b6b1-1066-4730-b0b6-261f01ed97f3 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:21 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:21.798502046Z" level=info msg="Creating container: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/cluster-policy-controller" id=4a62e1f3-1463-4c6c-8082-b26e32b862c1 name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:25:21 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:21.798747725Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 21:25:21 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:21.820119266Z" level=info msg="Created container a3f40f99ca7355409bf423aaf103bd4225adf3da5fb0e7fce21d850758e130bc: openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/cloud-credential-operator" id=6c7274f4-84bd-4e57-a333-044d8d96bc82 name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:25:21 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:21.821574348Z" level=info msg="Starting container: a3f40f99ca7355409bf423aaf103bd4225adf3da5fb0e7fce21d850758e130bc" id=0992c149-5745-48c2-9686-2edf5de49e01 name=/runtime.v1.RuntimeService/StartContainer Jan 16 21:25:21 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:21.875633639Z" level=info msg="Started container" PID=23028 containerID=a3f40f99ca7355409bf423aaf103bd4225adf3da5fb0e7fce21d850758e130bc description=openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain/cloud-credential-operator id=0992c149-5745-48c2-9686-2edf5de49e01 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5f6efe5d237d4bd39536b1922909fcb4def1e85783bb9f2ea13f860ba2f28bfa Jan 16 21:25:21 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_10_config-operator_01_apiserver-Default.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/apiservers.config.openshift.io -n as it already exists Jan 16 21:25:22 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_10_config-operator_01_authentication.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/authentications.config.openshift.io -n as it already exists Jan 16 21:25:22 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-dade381b7a787462cdd395a1450b4b07c801ca02f67fa3176714e39f7faa3b2d.scope. 
Jan 16 21:25:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:22.320404 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" event=&{ID:c3db590e56a311b869092b2d6b1724e5 Type:ContainerStarted Data:f0081ab2100c5e9a7477538e5d8f42f7fe2c8977201da22b8ab342be078a3d1e} Jan 16 21:25:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:22.338447 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" event=&{ID:b8b0f2012ce2b145220be181d7a5aa55 Type:ContainerStarted Data:ee5caea6024f9ae3c4f59f9f40d4339c62ef96606b5351eed3dfffb489236f21} Jan 16 21:25:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:22.338997 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:25:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:22.352078 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:25:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:22.352199 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:25:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:22.352248 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:25:22 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_10_config-operator_01_console.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/consoles.config.openshift.io -n as it already exists Jan 16 21:25:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:22.360654 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" event=&{ID:a6238b9f1f3a2f2bd2b4b1b0c7962bdd Type:ContainerStarted Data:a3f40f99ca7355409bf423aaf103bd4225adf3da5fb0e7fce21d850758e130bc} Jan 16 21:25:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:22.361673 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:25:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:22.364422 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:25:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:22.364472 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:25:22 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:22.364497 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:25:22 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container dade381b7a787462cdd395a1450b4b07c801ca02f67fa3176714e39f7faa3b2d. 
Jan 16 21:25:22 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:25:22 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:22.511185198Z" level=info msg="Created container dade381b7a787462cdd395a1450b4b07c801ca02f67fa3176714e39f7faa3b2d: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/cluster-policy-controller" id=4a62e1f3-1463-4c6c-8082-b26e32b862c1 name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:25:22 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:22.512667612Z" level=info msg="Starting container: dade381b7a787462cdd395a1450b4b07c801ca02f67fa3176714e39f7faa3b2d" id=9a88a64e-3035-4aab-81bc-4af2510c8121 name=/runtime.v1.RuntimeService/StartContainer Jan 16 21:25:22 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:22.551316370Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=f91c9964-785a-4138-93ee-8081e122606d name=/runtime.v1.ImageService/PullImage Jan 16 21:25:22 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:22.554465265Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa" id=6092e016-4cd3-4482-bbba-25d542d1d880 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:22 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:22.555109014Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:40e15091a793905eb63a02d951105fc5c5904bfb294f8004c052ac950c9ac44a,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-release@sha256:a346fc0c84644e64c726013a98bef0f75e58f246fce1faa83fb6bbbc6d4050aa],Size_:522846560,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=6092e016-4cd3-4482-bbba-25d542d1d880 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:22 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:22.555516841Z" level=info msg="Started container" PID=23095 containerID=dade381b7a787462cdd395a1450b4b07c801ca02f67fa3176714e39f7faa3b2d description=kube-system/bootstrap-kube-controller-manager-localhost.localdomain/cluster-policy-controller id=9a88a64e-3035-4aab-81bc-4af2510c8121 name=/runtime.v1.RuntimeService/StartContainer sandboxID=84a9a9fdb935fdf56b1aa2684295dfb71d8adb68ba31b934ad4cad7e6c1a23d6 Jan 16 21:25:22 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_10_config-operator_01_dns-Default.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/dnses.config.openshift.io -n as it already exists Jan 16 21:25:22 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:22.564763500Z" level=info msg="Creating container: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator" id=57076738-32e6-4f70-b202-6071d19c98bc name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:25:22 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:22.565116519Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 21:25:22 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_10_config-operator_01_featuregate.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/featuregates.config.openshift.io -n as it already exists Jan 16 21:25:22 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-cc9d16692cc862fc8c12652f65ebedd95f43822660c56cc0ff9bf7a6cf8799d8.scope. 
Jan 16 21:25:22 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_10_config-operator_01_image.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/images.config.openshift.io -n as it already exists
Jan 16 21:25:23 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container cc9d16692cc862fc8c12652f65ebedd95f43822660c56cc0ff9bf7a6cf8799d8.
Jan 16 21:25:23 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_10_config-operator_01_imagecontentpolicy.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/imagecontentpolicies.config.openshift.io -n as it already exists
Jan 16 21:25:23 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:23.216508591Z" level=info msg="Created container cc9d16692cc862fc8c12652f65ebedd95f43822660c56cc0ff9bf7a6cf8799d8: openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator" id=57076738-32e6-4f70-b202-6071d19c98bc name=/runtime.v1.RuntimeService/CreateContainer
Jan 16 21:25:23 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:23.224604357Z" level=info msg="Starting container: cc9d16692cc862fc8c12652f65ebedd95f43822660c56cc0ff9bf7a6cf8799d8" id=c793c244-e8d9-439e-b575-b5bec56d66a3 name=/runtime.v1.RuntimeService/StartContainer
Jan 16 21:25:23 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:23.258036467Z" level=info msg="Started container" PID=23140 containerID=cc9d16692cc862fc8c12652f65ebedd95f43822660c56cc0ff9bf7a6cf8799d8 description=openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain/cluster-version-operator id=c793c244-e8d9-439e-b575-b5bec56d66a3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3bdeb1297f2bd1ed9ad5b9b094efc69e4b41178913a37eb79f0ca9c388fc9c11
Jan 16 21:25:23 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_10_config-operator_01_imagecontentsourcepolicy.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/imagecontentsourcepolicies.operator.openshift.io -n as it already exists
Jan 16 21:25:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:23.384436 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" event=&{ID:c3db590e56a311b869092b2d6b1724e5 Type:ContainerStarted Data:dade381b7a787462cdd395a1450b4b07c801ca02f67fa3176714e39f7faa3b2d}
Jan 16 21:25:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:23.384881 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:23.391513 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:23.391619 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:23.391655 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:23.402875 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" event=&{ID:05c96ce8daffad47cf2b15e2a67753ec Type:ContainerStarted Data:cc9d16692cc862fc8c12652f65ebedd95f43822660c56cc0ff9bf7a6cf8799d8}
Jan 16 21:25:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:23.403321 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:23.405156 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:23.406720 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:23.406858 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:23.406888 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:23.407506 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:23.408396 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:23.409482 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:23.411691 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:23.418481 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:23.418563 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:23 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:23.418591 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:23 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_10_config-operator_01_imagedigestmirrorset.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/imagedigestmirrorsets.config.openshift.io -n as it already exists
Jan 16 21:25:23 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_10_config-operator_01_imagetagmirrorset.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/imagetagmirrorsets.config.openshift.io -n as it already exists
Jan 16 21:25:23 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_10_config-operator_01_infrastructure-Default.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/infrastructures.config.openshift.io -n as it already exists
Jan 16 21:25:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:24.407100 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:24.408172 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:24.409248 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:24.409353 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:24.409382 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:24.410779 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:24.411149 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:24 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:24.411310 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:24 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_10_config-operator_01_ingress.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/ingresses.config.openshift.io -n as it already exists
Jan 16 21:25:24 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_10_config-operator_01_network.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/networks.config.openshift.io -n as it already exists
Jan 16 21:25:24 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_10_config-operator_01_node.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/nodes.config.openshift.io -n as it already exists
Jan 16 21:25:24 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_10_config-operator_01_oauth.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/oauths.config.openshift.io -n as it already exists
Jan 16 21:25:24 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_10_config-operator_01_project.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/projects.config.openshift.io -n as it already exists
Jan 16 21:25:25 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_10_config-operator_01_scheduler.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/schedulers.config.openshift.io -n as it already exists
Jan 16 21:25:25 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-anyuid.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:anyuid -n as it already exists
Jan 16 21:25:25 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-hostaccess.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:hostaccess -n as it already exists
Jan 16 21:25:25 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-hostmount-anyuid.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:hostmount -n as it already exists
Jan 16 21:25:25 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-hostnetwork-v2.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:hostnetwork-v2 -n as it already exists
Jan 16 21:25:26 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-hostnetwork.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:hostnetwork -n as it already exists
Jan 16 21:25:26 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-nonroot-v2.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:nonroot-v2 -n as it already exists
Jan 16 21:25:26 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-nonroot.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:nonroot -n as it already exists
Jan 16 21:25:26 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-privileged.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:privileged -n as it already exists
Jan 16 21:25:26 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-restricted-v2.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:restricted-v2 -n as it already exists
Jan 16 21:25:27 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_cr-scc-restricted.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:scc:restricted -n as it already exists
Jan 16 21:25:27 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_crb-systemauthenticated-scc-restricted-v2.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/system:openshift:scc:restricted-v2 -n as it already exists
Jan 16 21:25:27 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_scc-anyuid.yaml" securitycontextconstraints.v1.security.openshift.io/anyuid -n as it already exists
Jan 16 21:25:27 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_scc-hostaccess.yaml" securitycontextconstraints.v1.security.openshift.io/hostaccess -n as it already exists
Jan 16 21:25:27 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_scc-hostmount-anyuid.yaml" securitycontextconstraints.v1.security.openshift.io/hostmount-anyuid -n as it already exists
Jan 16 21:25:28 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_scc-hostnetwork-v2.yaml" securitycontextconstraints.v1.security.openshift.io/hostnetwork-v2 -n as it already exists
Jan 16 21:25:28 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_scc-hostnetwork.yaml" securitycontextconstraints.v1.security.openshift.io/hostnetwork -n as it already exists
Jan 16 21:25:28 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_scc-nonroot-v2.yaml" securitycontextconstraints.v1.security.openshift.io/nonroot-v2 -n as it already exists
Jan 16 21:25:28 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_scc-nonroot.yaml" securitycontextconstraints.v1.security.openshift.io/nonroot -n as it already exists
Jan 16 21:25:28 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_scc-privileged.yaml" securitycontextconstraints.v1.security.openshift.io/privileged -n as it already exists
Jan 16 21:25:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:29.159117 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:29.164655 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:29.165059 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:29.165168 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:29 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_scc-restricted-v2.yaml" securitycontextconstraints.v1.security.openshift.io/restricted-v2 -n as it already exists
Jan 16 21:25:29 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0000_20_kube-apiserver-operator_00_scc-restricted.yaml" securitycontextconstraints.v1.security.openshift.io/restricted -n as it already exists
Jan 16 21:25:29 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "0001_00_cluster-version-operator_03_service.yaml" services.v1./cluster-version-operator -n openshift-cluster-version as it already exists
Jan 16 21:25:29 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "00_etcd-endpoints-cm.yaml" configmaps.v1./etcd-endpoints -n openshift-etcd as it already exists
Jan 16 21:25:29 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "00_namespace-security-allocation-controller-clusterrole.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:controller:namespace-security-allocation-controller -n as it already exists
Jan 16 21:25:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:29.975290 2579 kubelet.go:2529] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:25:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:29.975441 2579 kubelet.go:2529] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:25:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:29.975496 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:25:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:29.975535 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:25:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:29.976291 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:29.980023 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:29.980381 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:29.980743 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:29 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:29.999412 2579 kubelet.go:2529] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:25:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:30.010528 2579 kubelet.go:2529] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:25:30 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "00_namespace-security-allocation-controller-clusterrolebinding.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/system:openshift:controller:namespace-security-allocation-controller -n as it already exists
Jan 16 21:25:30 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "00_openshift-etcd-ns.yaml" namespaces.v1./openshift-etcd -n as it already exists
Jan 16 21:25:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:30.453053 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:30.456745 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:30.457767 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:30 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:30.458079 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:30 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "00_openshift-kube-apiserver-ns.yaml" namespaces.v1./openshift-kube-apiserver -n as it already exists
Jan 16 21:25:30 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "00_openshift-kube-apiserver-operator-ns.yaml" namespaces.v1./openshift-kube-apiserver-operator -n as it already exists
Jan 16 21:25:31 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "00_openshift-kube-controller-manager-ns.yaml" namespaces.v1./openshift-kube-controller-manager -n as it already exists
Jan 16 21:25:31 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "00_openshift-kube-controller-manager-operator-ns.yaml" namespaces.v1./openshift-kube-controller-manager-operator -n as it already exists
Jan 16 21:25:31 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "00_openshift-kube-scheduler-ns.yaml" namespaces.v1./openshift-kube-scheduler -n as it already exists
Jan 16 21:25:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:31.458017 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:31.460807 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:31.461076 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:31.461106 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:31 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:31.491477 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:25:31 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "00_podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:controller:privileged-namespaces-psa-label-syncer -n as it already exists
Jan 16 21:25:31 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "00_podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/system:openshift:controller:privileged-namespaces-psa-label-syncer -n as it already exists
Jan 16 21:25:31 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "00_podsecurity-admission-label-syncer-controller-clusterrole.yaml" clusterroles.v1.rbac.authorization.k8s.io/system:openshift:controller:podsecurity-admission-label-syncer-controller -n as it already exists
Jan 16 21:25:32 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "00_podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/system:openshift:controller:podsecurity-admission-label-syncer-controller -n as it already exists
Jan 16 21:25:32 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_baremetal-provisioning-config.yaml" provisionings.v1alpha1.metal3.io/provisioning-configuration -n as it already exists
Jan 16 21:25:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:32.465582 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:32.469903 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:32.470247 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:32.470308 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:32 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:32.517374 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain"
Jan 16 21:25:32 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_feature-gate.yaml" featuregates.v1.config.openshift.io/cluster -n as it already exists
Jan 16 21:25:32 api-int.lab.ocpipi.lan approve-csr.sh[23213]: No resources found
Jan 16 21:25:32 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_kubeadmin-password-secret.yaml" secrets.v1./kubeadmin -n kube-system as it already exists
Jan 16 21:25:32 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_openshift-cluster-api_host-bmc-secrets-0.yaml" secrets.v1./cp-1-bmc-secret -n openshift-machine-api as it already exists
Jan 16 21:25:33 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_openshift-cluster-api_host-bmc-secrets-1.yaml" secrets.v1./cp-2-bmc-secret -n openshift-machine-api as it already exists
Jan 16 21:25:33 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_openshift-cluster-api_host-bmc-secrets-2.yaml" secrets.v1./cp-3-bmc-secret -n openshift-machine-api as it already exists
Jan 16 21:25:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:33.471392 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:33.480240 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:33.480515 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:33 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:33.480617 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:33 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_openshift-cluster-api_host-bmc-secrets-3.yaml" secrets.v1./w-1-bmc-secret -n openshift-machine-api as it already exists
Jan 16 21:25:33 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_openshift-cluster-api_host-bmc-secrets-4.yaml" secrets.v1./w-2-bmc-secret -n openshift-machine-api as it already exists
Jan 16 21:25:33 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_openshift-cluster-api_hosts-0.yaml" baremetalhosts.v1alpha1.metal3.io/cp-1 -n openshift-machine-api as it already exists
Jan 16 21:25:34 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_openshift-cluster-api_hosts-1.yaml" baremetalhosts.v1alpha1.metal3.io/cp-2 -n openshift-machine-api as it already exists
Jan 16 21:25:34 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_openshift-cluster-api_hosts-2.yaml" baremetalhosts.v1alpha1.metal3.io/cp-3 -n openshift-machine-api as it already exists
Jan 16 21:25:34 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_openshift-cluster-api_hosts-3.yaml" baremetalhosts.v1alpha1.metal3.io/w-1 -n openshift-machine-api as it already exists
Jan 16 21:25:34 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_openshift-cluster-api_hosts-4.yaml" baremetalhosts.v1alpha1.metal3.io/w-2 -n openshift-machine-api as it already exists
Jan 16 21:25:34 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_openshift-cluster-api_master-machines-0.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-0 -n openshift-machine-api as it already exists
Jan 16 21:25:35 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_openshift-cluster-api_master-machines-1.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-1 -n openshift-machine-api as it already exists
Jan 16 21:25:35 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_openshift-cluster-api_master-machines-2.yaml" machines.v1beta1.machine.openshift.io/lab-wcpsl-master-2 -n openshift-machine-api as it already exists
Jan 16 21:25:35 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_openshift-cluster-api_master-user-data-secret.yaml" secrets.v1./master-user-data-managed -n openshift-machine-api as it already exists
Jan 16 21:25:35 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_openshift-cluster-api_worker-machineset-0.yaml" machinesets.v1beta1.machine.openshift.io/lab-wcpsl-worker-0 -n openshift-machine-api as it already exists
Jan 16 21:25:35 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_openshift-cluster-api_worker-user-data-secret.yaml" secrets.v1./worker-user-data-managed -n openshift-machine-api as it already exists
Jan 16 21:25:36 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_openshift-machineconfig_99-master-ssh.yaml" machineconfigs.v1.machineconfiguration.openshift.io/99-master-ssh -n as it already exists
Jan 16 21:25:36 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "99_openshift-machineconfig_99-worker-ssh.yaml" machineconfigs.v1.machineconfiguration.openshift.io/99-worker-ssh -n as it already exists
Jan 16 21:25:36 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "apiserver.openshift.io_apirequestcount.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io -n as it already exists
Jan 16 21:25:36 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "cco-cloudcredential_v1_credentialsrequest_crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/credentialsrequests.cloudcredential.openshift.io -n as it already exists
Jan 16 21:25:36 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "cco-cloudcredential_v1_operator_config_custresdef.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/cloudcredentials.operator.openshift.io -n as it already exists
Jan 16 21:25:37 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "cco-namespace.yaml" namespaces.v1./openshift-cloud-credential-operator -n as it already exists
Jan 16 21:25:37 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "cco-operator-config.yaml" cloudcredentials.v1.operator.openshift.io/cluster -n as it already exists
Jan 16 21:25:37 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "cluster-config.yaml" configmaps.v1./cluster-config-v1 -n kube-system as it already exists
Jan 16 21:25:37 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "cluster-dns-02-config.yml" dnses.v1.config.openshift.io/cluster -n as it already exists
Jan 16 21:25:37 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "cluster-infrastructure-02-config.yml" infrastructures.v1.config.openshift.io/cluster -n as it already exists
Jan 16 21:25:38 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "cluster-ingress-00-custom-resource-definition.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/ingresscontrollers.operator.openshift.io -n as it already exists
Jan 16 21:25:38 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "cluster-ingress-00-namespace.yaml" namespaces.v1./openshift-ingress-operator -n as it already exists
Jan 16 21:25:38 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "cluster-ingress-02-config.yml" ingresses.v1.config.openshift.io/cluster -n as it already exists
Jan 16 21:25:38 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.hnjWly.mount: Deactivated successfully.
Jan 16 21:25:38 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "cluster-network-01-crd.yml" customresourcedefinitions.v1.apiextensions.k8s.io/networks.config.openshift.io -n as it already exists
Jan 16 21:25:38 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "cluster-network-02-config.yml" networks.v1.config.openshift.io/cluster -n as it already exists
Jan 16 21:25:39 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "cluster-proxy-01-config.yaml" proxies.v1.config.openshift.io/cluster -n as it already exists
Jan 16 21:25:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:39.223125 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:39.231581 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:39.231909 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:39 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:39.232155 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:39 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "cluster-role-binding-kube-apiserver.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/kube-apiserver -n as it already exists
Jan 16 21:25:39 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "cluster-role-kube-apiserver.yaml" clusterroles.v1.rbac.authorization.k8s.io/kube-apiserver -n as it already exists
Jan 16 21:25:39 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "cluster-scheduler-02-config.yml" schedulers.v1.config.openshift.io/cluster -n as it already exists
Jan 16 21:25:39 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "configmap-admin-kubeconfig-client-ca.yaml" configmaps.v1./admin-kubeconfig-client-ca -n openshift-config as it already exists
Jan 16 21:25:40 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "configmap-csr-controller-ca.yaml" configmaps.v1./csr-controller-ca -n openshift-config-managed as it already exists
Jan 16 21:25:40 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "configmap-kubelet-bootstrap-kubeconfig-ca.yaml" configmaps.v1./kubelet-bootstrap-kubeconfig -n openshift-config-managed as it already exists
Jan 16 21:25:40 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "configmap-sa-token-signing-certs.yaml" configmaps.v1./sa-token-signing-certs -n openshift-config-managed as it already exists
Jan 16 21:25:40 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "csr-bootstrap-role-binding.yaml" clusterrolebindings.v1.rbac.authorization.k8s.io/system-bootstrap-node-bootstrapper -n as it already exists
Jan 16 21:25:40 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "cvo-overrides.yaml" clusterversions.v1.config.openshift.io/version -n openshift-cluster-version as it already exists
Jan 16 21:25:41 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "etcd-ca-bundle-configmap.yaml" configmaps.v1./etcd-ca-bundle -n openshift-config as it already exists
Jan 16 21:25:41 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "etcd-client-secret.yaml" secrets.v1./etcd-client -n openshift-config as it already exists
Jan 16 21:25:41 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "etcd-metric-client-secret.yaml" secrets.v1./etcd-metric-client -n openshift-config as it already exists
Jan 16 21:25:41 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "etcd-metric-serving-ca-configmap.yaml" configmaps.v1./etcd-metric-serving-ca -n openshift-config as it already exists
Jan 16 21:25:41 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "etcd-metric-signer-secret.yaml" secrets.v1./etcd-metric-signer -n openshift-config as it already exists
Jan 16 21:25:42 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "etcd-serving-ca-configmap.yaml" configmaps.v1./etcd-serving-ca -n openshift-config as it already exists
Jan 16 21:25:42 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "etcd-signer-secret.yaml" secrets.v1./etcd-signer -n openshift-config as it already exists
Jan 16 21:25:42 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "kube-apiserver-serving-ca-configmap.yaml" configmaps.v1./initial-kube-apiserver-server-ca -n openshift-config as it already exists
Jan 16 21:25:42 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:25:42 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "kube-cloud-config.yaml" secrets.v1./kube-cloud-cfg -n kube-system as it already exists
Jan 16 21:25:42 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "kube-system-configmap-root-ca.yaml" configmaps.v1./root-ca -n kube-system as it already exists
Jan 16 21:25:43 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "machine-config-server-tls-secret.yaml" secrets.v1./machine-config-server-tls -n openshift-machine-config-operator as it already exists
Jan 16 21:25:43 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "openshift-config-secret-pull-secret.yaml" secrets.v1./pull-secret -n openshift-config as it already exists
Jan 16 21:25:43 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "openshift-etcd-svc.yaml" services.v1./etcd -n openshift-etcd as it already exists
Jan 16 21:25:43 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "openshift-install-manifests.yaml" configmaps.v1./openshift-install-manifests -n openshift-config as it already exists
Jan 16 21:25:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:43.851623 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running
Jan 16 21:25:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:43.852109 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running
Jan 16 21:25:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:43.852166 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running
Jan 16 21:25:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:43.852276 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running
Jan 16 21:25:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:43.852339 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running
Jan 16 21:25:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:43.852396 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running
Jan 16 21:25:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:43.852444 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running
Jan 16 21:25:43 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:43.852485 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running
Jan 16 21:25:43 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "openshift-install.yaml" configmaps.v1./openshift-install -n openshift-config as it already exists
Jan 16 21:25:44 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "secret-aggregator-client-signer.yaml" secrets.v1./aggregator-client-signer -n openshift-kube-apiserver-operator as it already exists
Jan 16 21:25:44 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "secret-bound-sa-token-signing-key.yaml" secrets.v1./next-bound-service-account-signing-key -n openshift-kube-apiserver-operator as it already exists
Jan 16 21:25:44 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "secret-control-plane-client-signer.yaml" secrets.v1./kube-control-plane-signer -n openshift-kube-apiserver-operator as it already exists
Jan 16 21:25:44 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "secret-csr-signer-signer.yaml" secrets.v1./csr-signer-signer -n openshift-kube-controller-manager-operator as it already exists
Jan 16 21:25:44 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "secret-initial-kube-controller-manager-service-account-private-key.yaml" secrets.v1./initial-service-account-private-key -n openshift-config as it already exists
Jan 16 21:25:45 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "secret-kube-apiserver-to-kubelet-signer.yaml" secrets.v1./kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator as it already exists
Jan 16 21:25:45 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "secret-loadbalancer-serving-signer.yaml" secrets.v1./loadbalancer-serving-signer -n openshift-kube-apiserver-operator as it already exists
Jan 16 21:25:45 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "secret-localhost-serving-signer.yaml" secrets.v1./localhost-serving-signer -n openshift-kube-apiserver-operator as it already exists
Jan 16 21:25:45 api-int.lab.ocpipi.lan bootkube.sh[22872]: Skipped "secret-service-network-serving-signer.yaml" secrets.v1./service-network-serving-signer -n openshift-kube-apiserver-operator as it already exists
Jan 16 21:25:48 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.tZNJRy.mount: Deactivated successfully.
Jan 16 21:25:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:49.325637 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:25:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:49.334322 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:25:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:49.335171 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:25:49 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:49.335752 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:25:52 api-int.lab.ocpipi.lan systemd[1]: crio-d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6.scope: Deactivated successfully.
Jan 16 21:25:52 api-int.lab.ocpipi.lan systemd[1]: crio-d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6.scope: Consumed 8min 49.109s CPU time.
Jan 16 21:25:52 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6.scope: Deactivated successfully.
Jan 16 21:25:52 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-e72d8291fb0852fb2342cc346a7d8201fe796c38a32caa667d93c80a37e3c61a-merged.mount: Deactivated successfully.
Jan 16 21:25:52 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:52.252273715Z" level=info msg="Stopped container d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver" id=00ef27d7-fa78-4282-9373-d1a76e9fa8be name=/runtime.v1.RuntimeService/StopContainer
Jan 16 21:25:52 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:52.255512876Z" level=info msg="Stopping pod sandbox: c6fca5c97022384178f10593b3c69027ffc4f49d245087f693d7ec56d9af4cc6" id=5cdb03ba-a027-4fcf-a408-2d2d4a972753 name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:25:52 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-532bb2098bde6e8630243e4acc59b4ece7c06f761f67a7a351d5899981b84059-merged.mount: Deactivated successfully.
Jan 16 21:25:52 api-int.lab.ocpipi.lan systemd[1]: run-ipcns-e1006dcc\x2df0c4\x2d4d4a\x2d82fb\x2d0cf55d02240a.mount: Deactivated successfully.
Jan 16 21:25:52 api-int.lab.ocpipi.lan systemd[1]: run-utsns-e1006dcc\x2df0c4\x2d4d4a\x2d82fb\x2d0cf55d02240a.mount: Deactivated successfully.
Jan 16 21:25:52 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:52.290317915Z" level=info msg="Stopped pod sandbox: c6fca5c97022384178f10593b3c69027ffc4f49d245087f693d7ec56d9af4cc6" id=5cdb03ba-a027-4fcf-a408-2d2d4a972753 name=/runtime.v1.RuntimeService/StopPodSandbox
Jan 16 21:25:52 api-int.lab.ocpipi.lan systemd[1]: run-netns-e1006dcc\x2df0c4\x2d4d4a\x2d82fb\x2d0cf55d02240a.mount: Deactivated successfully.
Jan 16 21:25:52 api-int.lab.ocpipi.lan systemd[1]: run-containers-storage-overlay\x2dcontainers-c6fca5c97022384178f10593b3c69027ffc4f49d245087f693d7ec56d9af4cc6-userdata-shm.mount: Deactivated successfully.
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.410535 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-config\") pod \"1cb3be1f2df5273e9b77f7050777bcbe\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") "
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.411367 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-ssl-certs-host\") pod \"1cb3be1f2df5273e9b77f7050777bcbe\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") "
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.411455 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-etc-kubernetes-cloud\") pod \"1cb3be1f2df5273e9b77f7050777bcbe\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") "
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.411523 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-logs\") pod \"1cb3be1f2df5273e9b77f7050777bcbe\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") "
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.411586 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-audit-dir\") pod \"1cb3be1f2df5273e9b77f7050777bcbe\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") "
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.411651 2579 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-secrets\") pod \"1cb3be1f2df5273e9b77f7050777bcbe\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") "
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.412083 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-secrets" (OuterVolumeSpecName: "secrets") pod "1cb3be1f2df5273e9b77f7050777bcbe" (UID: "1cb3be1f2df5273e9b77f7050777bcbe"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.411061 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-config" (OuterVolumeSpecName: "config") pod "1cb3be1f2df5273e9b77f7050777bcbe" (UID: "1cb3be1f2df5273e9b77f7050777bcbe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.412234 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "1cb3be1f2df5273e9b77f7050777bcbe" (UID: "1cb3be1f2df5273e9b77f7050777bcbe"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.412363 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "1cb3be1f2df5273e9b77f7050777bcbe" (UID: "1cb3be1f2df5273e9b77f7050777bcbe"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.412437 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "1cb3be1f2df5273e9b77f7050777bcbe" (UID: "1cb3be1f2df5273e9b77f7050777bcbe"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.412443 2579 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-logs" (OuterVolumeSpecName: "logs") pod "1cb3be1f2df5273e9b77f7050777bcbe" (UID: "1cb3be1f2df5273e9b77f7050777bcbe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.513143 2579 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-config\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.513320 2579 reconciler_common.go:300] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-ssl-certs-host\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.513378 2579 reconciler_common.go:300] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-etc-kubernetes-cloud\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.513432 2579 reconciler_common.go:300] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-logs\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.513474 2579 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-audit-dir\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.513539 2579 reconciler_common.go:300] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-secrets\") on node \"localhost.localdomain\" DevicePath \"\""
Jan 16 21:25:52 api-int.lab.ocpipi.lan systemd[1]: Removed slice libcontainer container kubepods-burstable-pod1cb3be1f2df5273e9b77f7050777bcbe.slice.
Jan 16 21:25:52 api-int.lab.ocpipi.lan systemd[1]: kubepods-burstable-pod1cb3be1f2df5273e9b77f7050777bcbe.slice: Consumed 8min 50.033s CPU time.
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.645454 2579 generic.go:334] "Generic (PLEG): container finished" podID=1cb3be1f2df5273e9b77f7050777bcbe containerID="d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6" exitCode=0
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.645670 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" event=&{ID:1cb3be1f2df5273e9b77f7050777bcbe Type:ContainerDied Data:d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6}
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.645760 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" event=&{ID:1cb3be1f2df5273e9b77f7050777bcbe Type:ContainerDied Data:c6fca5c97022384178f10593b3c69027ffc4f49d245087f693d7ec56d9af4cc6}
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.646142 2579 scope.go:115] "RemoveContainer" containerID="832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0"
Jan 16 21:25:52 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:52.654473024Z" level=info msg="Removing container: 832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0" id=6bff869d-8d1d-4299-aa5e-023e9ff3bb9c name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:25:52 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:52.737164894Z" level=info msg="Removed container 832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver-insecure-readyz" id=6bff869d-8d1d-4299-aa5e-023e9ff3bb9c name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.739234 2579 scope.go:115] "RemoveContainer" containerID="d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6"
Jan 16 21:25:52 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:52.746610311Z" level=info msg="Removing container: d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6" id=a0f41511-36b2-4c0a-a4aa-254228c596a0 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:25:52 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:52.896489426Z" level=info msg="Removed container d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver" id=a0f41511-36b2-4c0a-a4aa-254228c596a0 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:25:52 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:52.908413 2579 scope.go:115] "RemoveContainer" containerID="88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372"
Jan 16 21:25:52 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:52.914447971Z" level=info msg="Removing container: 88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372" id=dc060afc-1323-469b-8223-49466b66e42b name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:25:53 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:53.086720608Z" level=info msg="Removed container 88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/setup" id=dc060afc-1323-469b-8223-49466b66e42b name=/runtime.v1.RuntimeService/RemoveContainer
Jan 16 21:25:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:53.088910 2579 scope.go:115] "RemoveContainer" containerID="832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0"
Jan 16 21:25:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:25:53.091407 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0\": container with ID starting with 832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0 not found: ID does not exist" containerID="832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0"
Jan 16 21:25:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:53.091575 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0} err="failed to get container status \"832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0\": rpc error: code = NotFound desc = could not find container \"832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0\": container with ID starting with 832bc24a6eaa010384b99939a6b7ea8f63015c7a33f91f7a705041aac859cca0 not found: ID does not exist"
Jan 16 21:25:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:53.091623 2579 scope.go:115] "RemoveContainer" containerID="d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6"
Jan 16 21:25:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:25:53.094298 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6\": container with ID starting with d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6 not found: ID does not exist" containerID="d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6"
Jan 16 21:25:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:53.094448 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6} err="failed to get container status \"d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6\": rpc error: code = NotFound desc = could not find container \"d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6\": container with ID starting with d2372fe55957bd2b5d77cdbd933ab77b9ab3973bd9ccdcd935e102a58360d1e6 not found: ID does not exist"
Jan 16 21:25:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:53.094492 2579 scope.go:115] "RemoveContainer" containerID="88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372"
Jan 16 21:25:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: E0116 21:25:53.097271 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372\": container with ID starting with 88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372 not found: ID does not exist" containerID="88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372"
Jan 16 21:25:53 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:53.097379 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372} err="failed to get container status \"88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372\": rpc error: code = NotFound desc = could not find container \"88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372\": container with ID starting with 88e9088f133f5391f3e9167ca5757042fb9276f23dff38dc04b643fdf15c5372 not found: ID does not exist"
Jan 16 21:25:53 api-int.lab.ocpipi.lan systemd[1]: var-lib-containers-storage-overlay-4e899c13adc71983002391ccf4e5dadd75e15316fa9d1a6065334129db140546-merged.mount: Deactivated successfully.
Jan 16 21:25:53 api-int.lab.ocpipi.lan approve-csr.sh[23311]: E0116 21:25:53.357188 23311 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 21:25:53 api-int.lab.ocpipi.lan approve-csr.sh[23311]: E0116 21:25:53.361100 23311 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 21:25:53 api-int.lab.ocpipi.lan approve-csr.sh[23311]: E0116 21:25:53.364448 23311 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 21:25:53 api-int.lab.ocpipi.lan approve-csr.sh[23311]: E0116 21:25:53.367140 23311 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 21:25:53 api-int.lab.ocpipi.lan approve-csr.sh[23311]: E0116 21:25:53.371338 23311 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Jan 16 21:25:53 api-int.lab.ocpipi.lan approve-csr.sh[23311]: The connection to the server localhost:6443 was refused - did you specify the right host or port?
Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.534533 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-audit-dir\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.534780 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-config\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.535183 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-ssl-certs-host\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.535276 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-logs\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.535369 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-secrets\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.535456 2579 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain"
Jan 16 21:25:54 api-int.lab.ocpipi.lan systemd[1]: Created slice libcontainer container kubepods-burstable-pod1cb3be1f2df5273e9b77f7050777bcbe.slice.
Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.577753 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.583512 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.584450 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.584623 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.636389 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-logs\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.636637 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-secrets\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.636748 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.637239 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-audit-dir\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.637348 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-config\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.637444 2579 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-ssl-certs-host\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.637635 2579 operation_generator.go:718] 
"MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-ssl-certs-host\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.637782 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-secrets\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.638285 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.638551 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-audit-dir\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.638578 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-config\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.639160 2579 operation_generator.go:718] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/1cb3be1f2df5273e9b77f7050777bcbe-logs\") pod \"bootstrap-kube-apiserver-localhost.localdomain\" (UID: \"1cb3be1f2df5273e9b77f7050777bcbe\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.887542 2579 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.888614 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:54.889164 2579 kubelet.go:2529] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" Jan 16 21:25:54 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:54.890731810Z" level=info msg="Stopping pod sandbox: c6fca5c97022384178f10593b3c69027ffc4f49d245087f693d7ec56d9af4cc6" id=aab0660f-d2f8-4dd1-9a1b-5963e762c726 name=/runtime.v1.RuntimeService/StopPodSandbox Jan 16 21:25:54 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:54.891203831Z" level=info msg="Stopped pod sandbox (already stopped): c6fca5c97022384178f10593b3c69027ffc4f49d245087f693d7ec56d9af4cc6" id=aab0660f-d2f8-4dd1-9a1b-5963e762c726 name=/runtime.v1.RuntimeService/StopPodSandbox Jan 16 21:25:54 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:54.893764625Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/POD" id=621b6679-2f56-4f20-9846-90a8d806cbdc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 16 21:25:54 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:54.894277437Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 21:25:54 api-int.lab.ocpipi.lan kubelet.sh[2579]: W0116 21:25:54.971653 2579 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cb3be1f2df5273e9b77f7050777bcbe.slice/crio-c17de807e332ba031e33470df1ad799d4ee4b1f8bc6b7724534074d3ed9e5359 WatchSource:0}: Error finding container c17de807e332ba031e33470df1ad799d4ee4b1f8bc6b7724534074d3ed9e5359: Status 404 returned error can't find the container with id c17de807e332ba031e33470df1ad799d4ee4b1f8bc6b7724534074d3ed9e5359 Jan 16 21:25:54 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:54.974217831Z" level=info msg="Ran pod sandbox c17de807e332ba031e33470df1ad799d4ee4b1f8bc6b7724534074d3ed9e5359 with infra container: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/POD" id=621b6679-2f56-4f20-9846-90a8d806cbdc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 16 21:25:54 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:54.990214267Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=42f1fcac-6aaa-418d-9b77-558684684155 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:54 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:54.991536306Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=42f1fcac-6aaa-418d-9b77-558684684155 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:54 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:54.997383577Z" level=info 
msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=aba4941c-0d6c-4fc4-8a7e-988dcd85cb00 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:54 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:54.998295040Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=aba4941c-0d6c-4fc4-8a7e-988dcd85cb00 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:55 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:55.002529005Z" level=info msg="Creating container: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/setup" id=a00e323b-9c2f-4e2f-9f42-25dc5f4719f8 name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:25:55 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:55.003482153Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 21:25:55 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-0add7b62bd625ab77bc8392fc0e99d8dc05482ec18d1731dadb3185dc07eea82.scope. Jan 16 21:25:55 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:55.686372 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" event=&{ID:1cb3be1f2df5273e9b77f7050777bcbe Type:ContainerStarted Data:c17de807e332ba031e33470df1ad799d4ee4b1f8bc6b7724534074d3ed9e5359} Jan 16 21:25:55 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 0add7b62bd625ab77bc8392fc0e99d8dc05482ec18d1731dadb3185dc07eea82. Jan 16 21:25:55 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:55.928312122Z" level=info msg="Created container 0add7b62bd625ab77bc8392fc0e99d8dc05482ec18d1731dadb3185dc07eea82: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/setup" id=a00e323b-9c2f-4e2f-9f42-25dc5f4719f8 name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:25:55 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:55.933633692Z" level=info msg="Starting container: 0add7b62bd625ab77bc8392fc0e99d8dc05482ec18d1731dadb3185dc07eea82" id=8f2d1344-8b79-410d-94de-db9e76788414 name=/runtime.v1.RuntimeService/StartContainer Jan 16 21:25:55 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:55.988272392Z" level=info msg="Started container" PID=23356 containerID=0add7b62bd625ab77bc8392fc0e99d8dc05482ec18d1731dadb3185dc07eea82 description=openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/setup id=8f2d1344-8b79-410d-94de-db9e76788414 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c17de807e332ba031e33470df1ad799d4ee4b1f8bc6b7724534074d3ed9e5359 Jan 16 21:25:56 api-int.lab.ocpipi.lan systemd[1]: crio-0add7b62bd625ab77bc8392fc0e99d8dc05482ec18d1731dadb3185dc07eea82.scope: Deactivated successfully. Jan 16 21:25:56 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-0add7b62bd625ab77bc8392fc0e99d8dc05482ec18d1731dadb3185dc07eea82.scope: Deactivated successfully. 
Jan 16 21:25:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:56.702481 2579 generic.go:334] "Generic (PLEG): container finished" podID=1cb3be1f2df5273e9b77f7050777bcbe containerID="0add7b62bd625ab77bc8392fc0e99d8dc05482ec18d1731dadb3185dc07eea82" exitCode=0 Jan 16 21:25:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:56.702711 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" event=&{ID:1cb3be1f2df5273e9b77f7050777bcbe Type:ContainerDied Data:0add7b62bd625ab77bc8392fc0e99d8dc05482ec18d1731dadb3185dc07eea82} Jan 16 21:25:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:56.704215 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:25:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:56.710492 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:25:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:56.710672 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:25:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:56.710726 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:25:56 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:56.712578703Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=179eb3b5-ba9c-422a-81b1-f04ed925456e name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:56 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:56.715666656Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=179eb3b5-ba9c-422a-81b1-f04ed925456e name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:56.717414 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:25:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:56.724908 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:25:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:56.725323 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:25:56 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:56.725383 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:25:56 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:56.726569452Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=f887cd11-d34f-4d6d-8b80-e8308c65dca0 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:56 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:56.727466335Z" level=info 
msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=f887cd11-d34f-4d6d-8b80-e8308c65dca0 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:56 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:56.731530772Z" level=info msg="Creating container: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver" id=bab2c7d3-276b-4272-86a9-4165ab6314aa name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:25:56 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:56.732539804Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 21:25:57 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-adeb5fb43f729950b1ba8f87e5cdefec08733e2ba730cabf369a54b8fdf91fdc.scope. Jan 16 21:25:57 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container adeb5fb43f729950b1ba8f87e5cdefec08733e2ba730cabf369a54b8fdf91fdc. Jan 16 21:25:57 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:57.543644922Z" level=info msg="Created container adeb5fb43f729950b1ba8f87e5cdefec08733e2ba730cabf369a54b8fdf91fdc: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver" id=bab2c7d3-276b-4272-86a9-4165ab6314aa name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:25:57 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:57.548498179Z" level=info msg="Starting container: adeb5fb43f729950b1ba8f87e5cdefec08733e2ba730cabf369a54b8fdf91fdc" id=c6636ee4-1299-44a9-9a49-bf6625f0a9f0 name=/runtime.v1.RuntimeService/StartContainer Jan 16 21:25:57 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:57.591747156Z" level=info msg="Started container" PID=23412 containerID=adeb5fb43f729950b1ba8f87e5cdefec08733e2ba730cabf369a54b8fdf91fdc description=openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver id=c6636ee4-1299-44a9-9a49-bf6625f0a9f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c17de807e332ba031e33470df1ad799d4ee4b1f8bc6b7724534074d3ed9e5359 Jan 16 21:25:57 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:57.652491011Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c074b99f606a6eba6b937f3d96115ec5790b747f6c0b6f6eed01e4f1a3a189eb" id=a27d7b4c-aa12-4564-a8ac-d771b89af05e name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:57 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:57.653363877Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba904bf53d6c9cd58209eebeead820a9fc257a3eef7e2301313cd33072c494dd,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c074b99f606a6eba6b937f3d96115ec5790b747f6c0b6f6eed01e4f1a3a189eb],Size_:546075839,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=a27d7b4c-aa12-4564-a8ac-d771b89af05e name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:57 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:57.656288118Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c074b99f606a6eba6b937f3d96115ec5790b747f6c0b6f6eed01e4f1a3a189eb" 
id=62bb1220-afd8-4cf5-b549-28d754dc16d7 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:57 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:57.658605036Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba904bf53d6c9cd58209eebeead820a9fc257a3eef7e2301313cd33072c494dd,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c074b99f606a6eba6b937f3d96115ec5790b747f6c0b6f6eed01e4f1a3a189eb],Size_:546075839,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=62bb1220-afd8-4cf5-b549-28d754dc16d7 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:25:57 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:57.663521785Z" level=info msg="Creating container: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver-insecure-readyz" id=fa6ce43d-7a9d-4155-8d97-702ba69c1d3f name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:25:57 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:57.666268462Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 21:25:57 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:57.733323 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" event=&{ID:1cb3be1f2df5273e9b77f7050777bcbe Type:ContainerStarted Data:adeb5fb43f729950b1ba8f87e5cdefec08733e2ba730cabf369a54b8fdf91fdc} Jan 16 21:25:58 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-29d29dd73783eacfc2caa7661163931489eb876f743ca1f043dbc7455651136f.scope. Jan 16 21:25:58 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 29d29dd73783eacfc2caa7661163931489eb876f743ca1f043dbc7455651136f. Jan 16 21:25:58 api-int.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.aFG0w1.mount: Deactivated successfully. 
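The kube-apiserver-insecure-readyz sidecar being created here exposes the apiserver's /readyz result over plain HTTP, so bootstrap tooling can probe readiness without TLS client credentials. A quick manual check (port 6080 is an assumption based on common OpenShift bootstrap defaults, not a value read from this log):

  # Probe the plain-HTTP readiness sidecar (port 6080 is an assumption)
  curl -s http://localhost:6080/readyz
  # Probe the TLS endpoint directly, skipping certificate verification
  curl -sk https://localhost:6443/readyz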
Jan 16 21:25:58 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:58.707530984Z" level=info msg="Created container 29d29dd73783eacfc2caa7661163931489eb876f743ca1f043dbc7455651136f: openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver-insecure-readyz" id=fa6ce43d-7a9d-4155-8d97-702ba69c1d3f name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:25:58 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:58.709448002Z" level=info msg="Starting container: 29d29dd73783eacfc2caa7661163931489eb876f743ca1f043dbc7455651136f" id=0d64d7ee-e92e-4258-819a-aead8f3ad8ae name=/runtime.v1.RuntimeService/StartContainer Jan 16 21:25:58 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:25:58.753185143Z" level=info msg="Started container" PID=23465 containerID=29d29dd73783eacfc2caa7661163931489eb876f743ca1f043dbc7455651136f description=openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain/kube-apiserver-insecure-readyz id=0d64d7ee-e92e-4258-819a-aead8f3ad8ae name=/runtime.v1.RuntimeService/StartContainer sandboxID=c17de807e332ba031e33470df1ad799d4ee4b1f8bc6b7724534074d3ed9e5359 Jan 16 21:25:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:59.435580 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:25:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:59.441869 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:25:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:59.442280 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:25:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:59.442485 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:25:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:59.759126 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" event=&{ID:1cb3be1f2df5273e9b77f7050777bcbe Type:ContainerStarted Data:29d29dd73783eacfc2caa7661163931489eb876f743ca1f043dbc7455651136f} Jan 16 21:25:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:59.759696 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" Jan 16 21:25:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:59.759905 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:25:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:59.762374 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:25:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:59.762568 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:25:59 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:25:59.762770 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:00.766452 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:00 api-int.lab.ocpipi.lan 
kubelet.sh[2579]: I0116 21:26:00.777693 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:00.779402 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:00 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:00.780616 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:02 api-int.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:26:04 api-int.lab.ocpipi.lan systemd[1]: crio-f0081ab2100c5e9a7477538e5d8f42f7fe2c8977201da22b8ab342be078a3d1e.scope: Deactivated successfully. Jan 16 21:26:04 api-int.lab.ocpipi.lan systemd[1]: crio-f0081ab2100c5e9a7477538e5d8f42f7fe2c8977201da22b8ab342be078a3d1e.scope: Consumed 7.420s CPU time. Jan 16 21:26:04 api-int.lab.ocpipi.lan conmon[22960]: conmon f0081ab2100c5e9a7477 : container 22978 exited with status 1 Jan 16 21:26:04 api-int.lab.ocpipi.lan conmon[22960]: conmon f0081ab2100c5e9a7477 : Failed to open cgroups file: /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3db590e56a311b869092b2d6b1724e5.slice/crio-f0081ab2100c5e9a7477538e5d8f42f7fe2c8977201da22b8ab342be078a3d1e.scope/memory.events Jan 16 21:26:04 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-f0081ab2100c5e9a7477538e5d8f42f7fe2c8977201da22b8ab342be078a3d1e.scope: Deactivated successfully. Jan 16 21:26:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:05.803391 2579 generic.go:334] "Generic (PLEG): container finished" podID=c3db590e56a311b869092b2d6b1724e5 containerID="f0081ab2100c5e9a7477538e5d8f42f7fe2c8977201da22b8ab342be078a3d1e" exitCode=1 Jan 16 21:26:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:05.804415 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" event=&{ID:c3db590e56a311b869092b2d6b1724e5 Type:ContainerDied Data:f0081ab2100c5e9a7477538e5d8f42f7fe2c8977201da22b8ab342be078a3d1e} Jan 16 21:26:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:05.805162 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:05.807441 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:05.807640 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:05.807861 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:05 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:05.808304 2579 scope.go:115] "RemoveContainer" containerID="f0081ab2100c5e9a7477538e5d8f42f7fe2c8977201da22b8ab342be078a3d1e" Jan 16 21:26:05 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:05.810667162Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=a0e311fa-e854-4e90-9dae-4694b36bb089 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:26:05 
api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:05.811641095Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=a0e311fa-e854-4e90-9dae-4694b36bb089 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:26:05 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:05.813214536Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=051ff22a-9121-4443-89b0-c82a97b3fbcc name=/runtime.v1.ImageService/ImageStatus Jan 16 21:26:05 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:05.813532251Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=051ff22a-9121-4443-89b0-c82a97b3fbcc name=/runtime.v1.ImageService/ImageStatus Jan 16 21:26:05 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:05.816168495Z" level=info msg="Creating container: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager" id=0fcdda14-79c1-46b9-b0f2-67bfa01080be name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:26:05 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:05.816569536Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 21:26:06 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-9b139345ab34148f36dbd0a482b71700ffa8e4f86c6d3220206f52b19b0be427.scope. Jan 16 21:26:06 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container 9b139345ab34148f36dbd0a482b71700ffa8e4f86c6d3220206f52b19b0be427. Jan 16 21:26:06 api-int.lab.ocpipi.lan conmon[22941]: conmon ee5caea6024f9ae3c4f5 : container 22954 exited with status 1 Jan 16 21:26:06 api-int.lab.ocpipi.lan systemd[1]: crio-ee5caea6024f9ae3c4f59f9f40d4339c62ef96606b5351eed3dfffb489236f21.scope: Deactivated successfully. Jan 16 21:26:06 api-int.lab.ocpipi.lan systemd[1]: crio-ee5caea6024f9ae3c4f59f9f40d4339c62ef96606b5351eed3dfffb489236f21.scope: Consumed 2.910s CPU time. Jan 16 21:26:06 api-int.lab.ocpipi.lan systemd[1]: crio-conmon-ee5caea6024f9ae3c4f59f9f40d4339c62ef96606b5351eed3dfffb489236f21.scope: Deactivated successfully. 
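The exit-status-1 terminations of the bootstrap kube-controller-manager (f0081ab2...) and kube-scheduler (ee5caea6...) are the usual bootstrap crash loop while the apiserver finishes coming up; kubelet immediately logs "RemoveContainer" and creates replacements. Because dead containers are garbage-collected quickly, their logs are easiest to capture right away (container ID copied from the log):

  # Capture the tail of a dead container's log before kubelet removes it
  crictl logs --tail=30 f0081ab2100c5e9a7477538e5d8f42f7fe2c8977201da22b8ab342be078a3d1e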
Jan 16 21:26:06 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:06.394523343Z" level=info msg="Created container 9b139345ab34148f36dbd0a482b71700ffa8e4f86c6d3220206f52b19b0be427: kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager" id=0fcdda14-79c1-46b9-b0f2-67bfa01080be name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:26:06 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:06.396190374Z" level=info msg="Starting container: 9b139345ab34148f36dbd0a482b71700ffa8e4f86c6d3220206f52b19b0be427" id=539321ca-44f0-483f-8864-1727e136c766 name=/runtime.v1.RuntimeService/StartContainer Jan 16 21:26:06 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:06.440141098Z" level=info msg="Started container" PID=23546 containerID=9b139345ab34148f36dbd0a482b71700ffa8e4f86c6d3220206f52b19b0be427 description=kube-system/bootstrap-kube-controller-manager-localhost.localdomain/kube-controller-manager id=539321ca-44f0-483f-8864-1727e136c766 name=/runtime.v1.RuntimeService/StartContainer sandboxID=84a9a9fdb935fdf56b1aa2684295dfb71d8adb68ba31b934ad4cad7e6c1a23d6 Jan 16 21:26:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:06.812677 2579 generic.go:334] "Generic (PLEG): container finished" podID=b8b0f2012ce2b145220be181d7a5aa55 containerID="ee5caea6024f9ae3c4f59f9f40d4339c62ef96606b5351eed3dfffb489236f21" exitCode=1 Jan 16 21:26:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:06.813141 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" event=&{ID:b8b0f2012ce2b145220be181d7a5aa55 Type:ContainerDied Data:ee5caea6024f9ae3c4f59f9f40d4339c62ef96606b5351eed3dfffb489236f21} Jan 16 21:26:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:06.813634 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:06.815417 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:06.815467 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:06.815494 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:06.815641 2579 scope.go:115] "RemoveContainer" containerID="ee5caea6024f9ae3c4f59f9f40d4339c62ef96606b5351eed3dfffb489236f21" Jan 16 21:26:06 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:06.816576427Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=538475a8-2ec7-4420-b27c-910f8ec2878f name=/runtime.v1.ImageService/ImageStatus Jan 16 21:26:06 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:06.817251423Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" 
id=538475a8-2ec7-4420-b27c-910f8ec2878f name=/runtime.v1.ImageService/ImageStatus Jan 16 21:26:06 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:06.818271204Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1" id=e08230a9-a5e0-452d-a7a0-df869b4385f7 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:26:06 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:06.818732537Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:23795a905b7aea920205e53b9381ee82c3436ea79aed30cfc4ca7ab60d9253ff,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8082bdbe2714b943ac7b6420c75ba21d2f72fe66f84a75a63b52014a22cb7ac1],Size_:1018437235,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=e08230a9-a5e0-452d-a7a0-df869b4385f7 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:26:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:06.822327 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" event=&{ID:c3db590e56a311b869092b2d6b1724e5 Type:ContainerStarted Data:9b139345ab34148f36dbd0a482b71700ffa8e4f86c6d3220206f52b19b0be427} Jan 16 21:26:06 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:06.822306660Z" level=info msg="Creating container: kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler" id=7fa8300b-6a6b-47fc-b8e3-92e96051b9cc name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:26:06 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:06.822551805Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 16 21:26:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:06.823516 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:06.825662 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:06.825792 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:06 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:06.825881 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:07 api-int.lab.ocpipi.lan systemd[1]: Started crio-conmon-d5913b43ce61523898925233214efbc4035444a6670073d31b08398fffdc8341.scope. Jan 16 21:26:07 api-int.lab.ocpipi.lan systemd[1]: run-runc-d5913b43ce61523898925233214efbc4035444a6670073d31b08398fffdc8341-runc.eoJDhn.mount: Deactivated successfully. Jan 16 21:26:07 api-int.lab.ocpipi.lan systemd[1]: Started libcontainer container d5913b43ce61523898925233214efbc4035444a6670073d31b08398fffdc8341. 
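Once the replacement containers start, kubelet's startup and readiness probes (the "SyncLoop (probe)" entries below) move from "unhealthy" to "started" and finally "ready". To watch only the probe transitions, filter the journal by the syslog identifier these entries carry (kubelet.sh, as shown in this log):

  # Follow probe state changes for the bootstrap control-plane pods
  journalctl -b -f -t kubelet.sh | grep 'SyncLoop (probe)'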
Jan 16 21:26:07 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:07.504603923Z" level=info msg="Created container d5913b43ce61523898925233214efbc4035444a6670073d31b08398fffdc8341: kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler" id=7fa8300b-6a6b-47fc-b8e3-92e96051b9cc name=/runtime.v1.RuntimeService/CreateContainer Jan 16 21:26:07 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:07.506062647Z" level=info msg="Starting container: d5913b43ce61523898925233214efbc4035444a6670073d31b08398fffdc8341" id=e3fa1f1f-9756-4706-ac7c-b7ef117f2b03 name=/runtime.v1.RuntimeService/StartContainer Jan 16 21:26:07 api-int.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:07.540043640Z" level=info msg="Started container" PID=23603 containerID=d5913b43ce61523898925233214efbc4035444a6670073d31b08398fffdc8341 description=kube-system/bootstrap-kube-scheduler-localhost.localdomain/kube-scheduler id=e3fa1f1f-9756-4706-ac7c-b7ef117f2b03 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8b1cd37808edbb707d3022fa2253d889b3f4d83b84195201205430bd08259063 Jan 16 21:26:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:07.830136 2579 kubelet.go:2457] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" event=&{ID:b8b0f2012ce2b145220be181d7a5aa55 Type:ContainerStarted Data:d5913b43ce61523898925233214efbc4035444a6670073d31b08398fffdc8341} Jan 16 21:26:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:07.830512 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:07.832892 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:07.833031 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:07 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:07.833058 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:09.477289 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:09.480422 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:09.480523 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:09.480556 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:09.974540 2579 kubelet.go:2529] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:26:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:09.975409 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:26:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:09.975761 
2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:09.978685 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:09.978867 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:09 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:09.978907 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:10.539309 2579 kubelet.go:2529] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:26:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:10.844253 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:10.846793 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:10.847049 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:10 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:10.847078 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:11.847900 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:11.851152 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:11.851414 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:11 api-int.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:11.851587 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:13 api-int.lab.ocpipi.lan NetworkManager[1706]: <info>  [1705440373.9787] policy: set-hostname: set hostname to 'api.lab.ocpipi.lan' (from address lookup) Jan 16 21:26:14 api-int.lab.ocpipi.lan systemd[1]: Starting Hostname Service... Jan 16 21:26:14 api-int.lab.ocpipi.lan approve-csr.sh[23659]: No resources found Jan 16 21:26:14 api-int.lab.ocpipi.lan systemd[1]: Started Hostname Service. Jan 16 21:26:14 api.lab.ocpipi.lan systemd-hostnamed[23674]: Hostname set to <api.lab.ocpipi.lan> (transient) Jan 16 21:26:14 api.lab.ocpipi.lan systemd[1]: Starting Network Manager Script Dispatcher Service... Jan 16 21:26:14 api.lab.ocpipi.lan systemd[1]: Started Network Manager Script Dispatcher Service. 
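NetworkManager resolved the node's primary address via reverse DNS and installed 'api.lab.ocpipi.lan' as a transient hostname, which is why the journal's host field switches from api-int.lab.ocpipi.lan to api.lab.ocpipi.lan mid-stream. To confirm which hostname is transient versus static:

  # The transient hostname comes from NetworkManager's address lookup; the static one is set separately
  hostnamectl status
  nmcli general hostname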
Jan 16 21:26:14 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:14.909291 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" Jan 16 21:26:14 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:14.910408 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:14 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:14.914117 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:14 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:14.914301 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:14 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:14.914328 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:19 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:19.468384 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:19 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:19.480404 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:19 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:19.480558 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:19 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:19.480613 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:19 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:19.521913 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:19 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:19.531550 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:19 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:19.531907 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:19 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:19.532125 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:20 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:20.009160 2579 kubelet.go:2529] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" Jan 16 21:26:20 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:20.010304 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:20 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:20.014494 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:20 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:20.014729 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:20 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:20.014780 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" 
event="NodeHasSufficientPID" Jan 16 21:26:22 api.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:26:24 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:24.467431 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:24 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:24.474288 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:24 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:24.475236 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:24 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:24.476060 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:24 api.lab.ocpipi.lan systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Jan 16 21:26:29 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:29.466208 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:29 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:29.471033 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:29 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:29.471304 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:29 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:29.471505 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:29 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:29.639159 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:29 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:29.642673 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:29 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:29.643093 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:29 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:29.643299 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:31 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:31.466047 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:31 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:31.470332 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:31 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:31.470478 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:31 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:31.470513 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:34 api.lab.ocpipi.lan approve-csr.sh[23752]: No resources found Jan 16 21:26:39 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:39.690300 2579 kubelet_node_status.go:376] 
"Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:39 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:39.698174 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:39 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:39.698466 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:39 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:39.698532 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:43 api.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:26:43 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:43.855172 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 21:26:43 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:43.855598 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 21:26:43 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:43.855699 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 21:26:43 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:43.855758 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 21:26:43 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:43.856089 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 21:26:43 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:43.856163 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 21:26:43 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:43.856229 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 21:26:43 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:43.856279 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running Jan 16 21:26:43 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:43.856351 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 21:26:44 api.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:44.692630608Z" level=info msg="Stopping pod sandbox: c6fca5c97022384178f10593b3c69027ffc4f49d245087f693d7ec56d9af4cc6" id=6e85830a-105e-42c9-8c5b-57f2fcca109e name=/runtime.v1.RuntimeService/StopPodSandbox Jan 16 21:26:44 api.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:44.694383730Z" level=info msg="Stopped pod sandbox (already stopped): c6fca5c97022384178f10593b3c69027ffc4f49d245087f693d7ec56d9af4cc6" id=6e85830a-105e-42c9-8c5b-57f2fcca109e name=/runtime.v1.RuntimeService/StopPodSandbox Jan 16 21:26:44 api.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:44.696369411Z" level=info msg="Removing pod sandbox: c6fca5c97022384178f10593b3c69027ffc4f49d245087f693d7ec56d9af4cc6" id=b7c9ac07-52de-4aef-a9b2-8c02bc632cde name=/runtime.v1.RuntimeService/RemovePodSandbox Jan 16 21:26:44 
api.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:26:44.725888554Z" level=info msg="Removed pod sandbox: c6fca5c97022384178f10593b3c69027ffc4f49d245087f693d7ec56d9af4cc6" id=b7c9ac07-52de-4aef-a9b2-8c02bc632cde name=/runtime.v1.RuntimeService/RemovePodSandbox Jan 16 21:26:44 api.lab.ocpipi.lan systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 16 21:26:46 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:46.467595 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:46 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:46.480437 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:46 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:46.480787 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:46 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:46.481101 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:48 api.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.P7TnGG.mount: Deactivated successfully. Jan 16 21:26:49 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:49.800172 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:49 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:49.806771 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:49 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:49.807287 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:49 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:49.807348 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:53 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:53.466708 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:53 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:53.476263 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:53 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:53.476382 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:53 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:53.476431 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:26:55 api.lab.ocpipi.lan approve-csr.sh[23836]: No resources found Jan 16 21:26:59 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:59.881447 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:26:59 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:59.891281 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:26:59 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:59.892302 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:26:59 
api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:26:59.892702 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:27:03 api.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:27:08 api.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.uxMusG.mount: Deactivated successfully. Jan 16 21:27:09 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:09.983123 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:27:09 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:09.990156 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:27:09 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:09.990384 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:27:09 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:09.990464 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:27:16 api.lab.ocpipi.lan approve-csr.sh[23915]: No resources found Jan 16 21:27:20 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:20.097305 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:27:20 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:20.107168 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:27:20 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:20.107411 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:27:20 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:20.107472 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:27:23 api.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:27:23 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:23.467745 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:27:23 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:23.474140 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:27:23 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:23.474321 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:27:23 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:23.474400 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:27:26 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:26.467550 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:27:26 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:26.475498 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:27:26 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:26.475664 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" 
event="NodeHasNoDiskPressure" Jan 16 21:27:26 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:26.475721 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:27:30 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:30.180461 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:27:30 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:30.183745 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:27:30 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:30.183987 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:27:30 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:30.184020 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:27:31 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:31.467412 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:27:31 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:31.473764 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:27:31 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:31.474277 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:27:31 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:31.474340 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:27:32 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:32.466497 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:27:32 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:32.472429 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:27:32 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:32.474470 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:27:32 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:32.475087 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:27:33 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:33.466905 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:27:33 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:33.474719 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:27:33 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:33.475098 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:27:33 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:33.475165 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:27:37 api.lab.ocpipi.lan approve-csr.sh[23998]: No resources found Jan 16 21:27:40 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:40.220460 2579 kubelet_node_status.go:376] 
"Setting node annotation to enable volume controller attach/detach" Jan 16 21:27:40 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:40.226430 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:27:40 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:40.226654 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:27:40 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:40.226719 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:27:43 api.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:27:43 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:43.857685 2579 kubelet_getters.go:187] "Pod status updated" pod="default/bootstrap-machine-config-operator-localhost.localdomain" status=Running Jan 16 21:27:43 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:43.859533 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/coredns-localhost.localdomain" status=Running Jan 16 21:27:43 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:43.859670 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cloud-credential-operator/cloud-credential-operator-localhost.localdomain" status=Running Jan 16 21:27:43 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:43.860259 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-controller-manager-localhost.localdomain" status=Running Jan 16 21:27:43 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:43.860391 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-bootstrap-member-localhost.localdomain" status=Running Jan 16 21:27:43 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:43.860461 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kni-infra/keepalived-localhost.localdomain" status=Running Jan 16 21:27:43 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:43.860527 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-cluster-version/bootstrap-cluster-version-operator-localhost.localdomain" status=Running Jan 16 21:27:43 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:43.860576 2579 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-localhost.localdomain" status=Running Jan 16 21:27:43 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:43.860644 2579 kubelet_getters.go:187] "Pod status updated" pod="kube-system/bootstrap-kube-scheduler-localhost.localdomain" status=Running Jan 16 21:27:43 api.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:27:43.988762189Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2" id=11bca772-74de-4fa7-b551-0337724cec81 name=/runtime.v1.ImageService/ImageStatus Jan 16 21:27:43 api.lab.ocpipi.lan crio[2304]: time="2024-01-16 21:27:43.991672351Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a5beb712367dd5020b5a7b99c2ffbfcd91d3c6c425625d5cc816f58cf145564f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcc1d762ed74e1eb6027355a2e6cc3933bd7b35cee9d6235de0fbe2d2958b0c2],Size_:448590957,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=11bca772-74de-4fa7-b551-0337724cec81 
name=/runtime.v1.ImageService/ImageStatus Jan 16 21:27:44 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:44.466415 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:27:44 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:44.475089 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:27:44 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:44.476886 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:27:44 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:44.477274 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:27:49 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:49.466316 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:27:49 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:49.471556 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:27:49 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:49.472061 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:27:49 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:49.472132 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:27:50 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:50.320547 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:27:50 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:50.329410 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:27:50 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:50.329748 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:27:50 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:27:50.330153 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:27:58 api.lab.ocpipi.lan approve-csr.sh[24077]: No resources found Jan 16 21:28:00 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:00.400390 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:28:00 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:00.406746 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:28:00 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:00.407394 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:28:00 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:00.407458 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:28:03 api.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up Jan 16 21:28:07 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:07.467382 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:28:07 
api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:07.473350 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:28:07 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:07.473453 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:28:07 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:07.473536 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:28:08 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:08.468271 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:28:08 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:08.474335 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:28:08 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:08.474538 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:28:08 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:08.474595 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:28:10 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:10.479729 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Jan 16 21:28:10 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:10.487314 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory" Jan 16 21:28:10 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:10.487530 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure" Jan 16 21:28:10 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:10.487600 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID" Jan 16 21:28:18 api.lab.ocpipi.lan systemd[1]: run-runc-c55440247c574f2fe832b15970116650f273bce7dc15db68b7dffedbaac07e0d-runc.StYaVW.mount: Deactivated successfully. 
Jan 16 21:28:18 api.lab.ocpipi.lan approve-csr.sh[24157]: No resources found
Jan 16 21:28:20 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:20.579640 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:28:20 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:20.588564 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:28:20 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:20.588672 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:28:20 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:20.588720 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"
Jan 16 21:28:23 api.lab.ocpipi.lan master-bmh-update.sh[6528]: waiting for a master node to show up
Jan 16 21:28:27 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:27.466416 2579 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 16 21:28:27 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:27.473346 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientMemory"
Jan 16 21:28:27 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:27.474086 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasNoDiskPressure"
Jan 16 21:28:27 api.lab.ocpipi.lan kubelet.sh[2579]: I0116 21:28:27.474491 2579 kubelet_node_status.go:696] "Recording event message for node" node="localhost.localdomain" event="NodeHasSufficientPID"