Issue Description
Containers with RestartPolicy=unless-stopped are never started by podman-restart.service on boot.
Steps to reproduce the issue
- Use docker (or the podman CLI) to start a few containers with a "restart: unless-stopped" policy, e.g. as sketched below.
- Reboot the system with the podman-restart.service unit enabled.
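A minimal reproduction along these lines (container name and image are placeholders, assuming root podman inside the machine VM):

```
# Start a container with the unless-stopped restart policy (name/image are examples)
podman run -d --name web --restart unless-stopped docker.io/library/nginx

# Make sure the restart service is enabled, then reboot
systemctl enable --now podman-restart.service
systemctl reboot

# After boot, the container should be running again, but it is not
podman ps --filter name=web
```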
Describe the results you received
None of the containers with the "unless-stopped" restart policy are restarted by podman-restart.service.
Describe the results you expected
All containers with an "unless-stopped" restart policy that were running at shutdown are restarted by podman-restart.service upon system boot.
podman info output
host:
  arch: arm64
  buildahVersion: 1.32.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - pids
  - rdma
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc38.aarch64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 98.56
    systemPercent: 0.5
    userPercent: 0.93
  cpus: 4
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: coreos
    version: "38"
  eventLogger: journald
  freeLocks: 2038
  hostname: localhost.localdomain
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.5.6-200.fc38.aarch64
  linkmode: dynamic
  logDriver: journald
  memFree: 1659879424
  memTotal: 4086497280
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.8.0-1.fc38.aarch64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.8.0
    package: netavark-1.8.0-2.fc38.aarch64
    path: /usr/libexec/podman/netavark
    version: netavark 1.8.0
  ociRuntime:
    name: crun
    package: crun-1.9.2-1.fc38.aarch64
    path: /usr/bin/crun
    version: |-
      crun version 1.9.2
      commit: 35274d346d2e9ffeacb22cc11590b0266a23d634
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20231004.gf851084-1.fc38.aarch64
    version: |
      pasta 0^20231004.gf851084-1.fc38.aarch64-pasta
      Copyright Red Hat
      GNU General Public License, version 2 or later
      <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.1-1.fc38.aarch64
    version: |-
      slirp4netns version 1.2.1
      commit: 09e31e92fa3d2a1d3ca261adaeb012c8d75a8194
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 0
  swapTotal: 0
  uptime: 1h 35m 16.00s (Approximately 0.04 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /usr/share/containers/storage.conf
  containerStore:
    number: 7
    paused: 0
    running: 3
    stopped: 4
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 106769133568
  graphRootUsed: 6182490112
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Supports shifting: "true"
    Supports volatile: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 5
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.7.0
  Built: 1695839065
  BuiltTime: Wed Sep 27 18:24:25 2023
  GitCommit: ""
  GoVersion: go1.20.8
  Os: linux
  OsArch: linux/arm64
  Version: 4.7.0
Podman in a container
No
Privileged Or Rootless
Privileged
Upstream Latest Release
Yes
Additional environment details
Running on macOS, installed via Homebrew.
Additional information
#17851 and #17580 have also reported this and were closed, but it is still happening.
This is quite painful, because containers with the "always" policy are always restarted. Many devs in my org will have multiple copies of, for example, a postgres database container. We expect to be able to stop most of them, keep one running, and have that one and only that one resume when the podman machine starts up after a reboot.
But currently, with podman-restart, all "always" containers will start, and the multiple postgres containers will all try to grab the same port 5432.
There were some asks in #10539 about how to implement unless-stopped, but no answers were found there. It seems like we need to store more state than is currently available (or than I know how to access) to properly implement this capability: we have to know which containers were running and which were stopped at the time podman shut down / the system rebooted. A rough sketch of that idea is below.
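For illustration only, a minimal sketch of the extra state this would take, assuming root podman and some shutdown/boot hook inside the machine VM (the state file path and the hook wiring are made up, not anything podman ships today):

```
# On shutdown: remember which containers were still running (hypothetical state file)
podman ps --quiet > /var/lib/podman-was-running.list

# On boot: start only the containers that were running before shutdown,
# instead of every container that has a restart policy
xargs -r podman start < /var/lib/podman-was-running.list
```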