Description
Hi, I am following the steps in the repo to build a cluster with Docker images. I have the first node up and running and can connect to its database, but when I add the second node to the cluster I run into an error. Here is the output along with the other logs:
docker create -t -i \
> --hostname racnode2 \
> --volume /dev/shm \
> --tmpfs /dev/shm:rw,exec,size=4G \
> --volume /boot:/boot:ro \
> --dns-search=example.com \
> --volume /opt/containers/rac_host_file:/etc/hosts \
> --volume /opt/.secrets:/run/secrets:ro \
> --dns=172.16.1.25 \
> --dns-search=example.com \
> --privileged=false \
> --volume racstorage:/oradata \
> --cap-add=SYS_NICE \
> --cap-add=SYS_RESOURCE \
> --cap-add=NET_ADMIN \
> -e DNS_SERVERS="172.16.1.25" \
> -e EXISTING_CLS_NODES=racnode1 \
> -e NODE_VIP=172.16.1.161 \
> -e VIP_HOSTNAME=racnode2-vip \
> -e PRIV_IP=192.168.17.151 \
> -e PRIV_HOSTNAME=racnode2-priv \
> -e PUBLIC_IP=172.16.1.151 \
> -e PUBLIC_HOSTNAME=racnode2 \
> -e DOMAIN=example.com \
> -e SCAN_NAME=racnode-scan \
> -e ASM_DISCOVERY_DIR=/oradata \
> -e ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img \
> -e ORACLE_SID=ORCLCDB \
> -e OP_TYPE=ADDNODE \
> -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \
> -e PWD_KEY=pwd.key \
> --tmpfs=/run -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
> --cpu-rt-runtime=95000 \
> --ulimit rtprio=99 \
> --restart=always \
> --name racnode2 \
> oracle/database-rac:21.3.0
6a1aca67d17b80c6b32a9110e686db8d7d3a96ed67437ce3a1577e0d56535fdf
[root@vm-oracle dockerfiles]#
[root@vm-oracle dockerfiles]#
[root@vm-oracle dockerfiles]# docker network disconnect bridge racnode2
[root@vm-oracle dockerfiles]# docker network connect rac_pub1_nw --ip 172.16.1.151 racnode2
[root@vm-oracle dockerfiles]#
[root@vm-oracle dockerfiles]#
[root@vm-oracle dockerfiles]#
[root@vm-oracle dockerfiles]# docker network connect rac_priv1_nw --ip 192.168.17.151 racnode2
[root@vm-oracle dockerfiles]# docker start racnode2
racnode2
[root@vm-oracle dockerfiles]# docker logs -f racnode2
PATH=/bin:/usr/bin:/sbin:/usr/sbin
HOSTNAME=racnode2
TERM=xterm
PRIV_IP=192.168.17.151
PUBLIC_IP=172.16.1.151
DNS_SERVERS=172.16.1.25
EXISTING_CLS_NODES=racnode1
DOMAIN=example.com
SCAN_NAME=racnode-scan
ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img
NODE_VIP=172.16.1.161
VIP_HOSTNAME=racnode2-vip
PRIV_HOSTNAME=racnode2-priv
COMMON_OS_PWD_FILE=common_os_pwdfile.enc
PUBLIC_HOSTNAME=racnode2
ASM_DISCOVERY_DIR=/oradata
ORACLE_SID=ORCLCDB
OP_TYPE=ADDNODE
PWD_KEY=pwd.key
SETUP_LINUX_FILE=setupLinuxEnv.sh
INSTALL_DIR=/opt/scripts
GRID_BASE=/u01/app/grid
GRID_HOME=/u01/app/21.3.0/grid
INSTALL_FILE_1=LINUX.X64_213000_grid_home.zip
GRID_INSTALL_RSP=gridsetup_21c.rsp
GRID_SW_INSTALL_RSP=grid_sw_install_21c.rsp
GRID_SETUP_FILE=setupGrid.sh
FIXUP_PREQ_FILE=fixupPreq.sh
INSTALL_GRID_BINARIES_FILE=installGridBinaries.sh
INSTALL_GRID_PATCH=applyGridPatch.sh
INVENTORY=/u01/app/oraInventory
CONFIGGRID=configGrid.sh
ADDNODE=AddNode.sh
DELNODE=DelNode.sh
ADDNODE_RSP=grid_addnode_21c.rsp
SETUPSSH=setupSSH.expect
DOCKERORACLEINIT=dockeroracleinit
GRID_USER_HOME=/home/grid
SETUPGRIDENV=setupGridEnv.sh
RESET_OS_PASSWORD=resetOSPassword.sh
MULTI_NODE_INSTALL=MultiNodeInstall.py
DB_BASE=/u01/app/oracle
DB_HOME=/u01/app/oracle/product/21.3.0/dbhome_1
INSTALL_FILE_2=LINUX.X64_213000_db_home.zip
DB_INSTALL_RSP=db_sw_install_21c.rsp
DBCA_RSP=dbca_21c.rsp
DB_SETUP_FILE=setupDB.sh
PWD_FILE=setPassword.sh
RUN_FILE=runOracle.sh
STOP_FILE=stopOracle.sh
ENABLE_RAC_FILE=enableRAC.sh
CHECK_DB_FILE=checkDBStatus.sh
USER_SCRIPTS_FILE=runUserScripts.sh
REMOTE_LISTENER_FILE=remoteListener.sh
INSTALL_DB_BINARIES_FILE=installDBBinaries.sh
GRID_HOME_CLEANUP=GridHomeCleanup.sh
ORACLE_HOME_CLEANUP=OracleHomeCleanup.sh
DB_USER=oracle
GRID_USER=grid
FUNCTIONS=functions.sh
COMMON_SCRIPTS=/common_scripts
CHECK_SPACE_FILE=checkSpace.sh
RESET_FAILED_UNITS=resetFailedUnits.sh
SET_CRONTAB=setCrontab.sh
CRONTAB_ENTRY=crontabEntry
EXPECT=/usr/bin/expect
BIN=/usr/sbin
container=true
INSTALL_SCRIPTS=/opt/scripts/install
SCRIPT_DIR=/opt/scripts/startup
GRID_PATH=/u01/app/21.3.0/grid/bin:/u01/app/21.3.0/grid/OPatch/:/u01/app/21.3.0/grid/perl/bin:/usr/sbin:/bin:/sbin
DB_PATH=/u01/app/oracle/product/21.3.0/dbhome_1/bin:/u01/app/oracle/product/21.3.0/dbhome_1/OPatch/:/u01/app/oracle/product/21.3.0/dbhome_1/perl/bin:/usr/sbin:/bin:/sbin
GRID_LD_LIBRARY_PATH=/u01/app/21.3.0/grid/lib:/usr/lib:/lib
DB_LD_LIBRARY_PATH=/u01/app/oracle/product/21.3.0/dbhome_1/lib:/usr/lib:/lib
HOME=/home/grid
Failed to parse kernel command line, ignoring: No such file or directory
systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
Detected virtualization other.
Detected architecture x86-64.
Welcome to Oracle Linux Server 7.9!
Set hostname to <racnode2>.
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
/usr/lib/systemd/system-generators/systemd-fstab-generator failed with error code 1.
[/usr/lib/systemd/system/systemd-pstore.service:22] Unknown lvalue 'StateDirectory' in section 'Service'
Cannot add dependency job for unit display-manager.service, ignoring: Unit not found.
[ OK ] Reached target Swap.
[ OK ] Reached target Local Encrypted Volumes.
[ OK ] Started Forward Password Requests to Wall Directory Watch.
[ OK ] Created slice Root Slice.
[ OK ] Created slice User and Session Slice.
[ OK ] Listening on Delayed Shutdown Socket.
[ OK ] Listening on Journal Socket.
[ OK ] Started Dispatch Password Requests to Console Directory Watch.
[ OK ] Created slice System Slice.
[ OK ] Reached target Slices.
Starting Read and set NIS domainname from /etc/sysconfig/network...
Couldn't determine result for ConditionKernelCommandLine=|rd.modules-load for systemd-modules-load.service, assuming failed: No such file or directory
Couldn't determine result for ConditionKernelCommandLine=|modules-load for systemd-modules-load.service, assuming failed: No such file or directory
[ OK ] Created slice system-getty.slice.
Starting Journal Service...
Starting Rebuild Hardware Database...
[ OK ] Reached target Local File Systems (Pre).
[ OK ] Listening on /dev/initctl Compatibility Named Pipe.
[ OK ] Reached target RPC Port Mapper.
Starting Configure read-only root support...
[ OK ] Started Read and set NIS domainname from /etc/sysconfig/network.
[ OK ] Started Journal Service.
Starting Flush Journal to Persistent Storage...
[ OK ] Started Configure read-only root support.
Starting Load/Save Random Seed...
[ OK ] Reached target Local File Systems.
Starting Preprocess NFS configuration...
Starting Rebuild Journal Catalog...
Starting Mark the need to relabel after reboot...
[ OK ] Started Flush Journal to Persistent Storage.
[ OK ] Started Load/Save Random Seed.
[ OK ] Started Mark the need to relabel after reboot.
Starting Create Volatile Files and Directories...
[ OK ] Started Preprocess NFS configuration.
[ OK ] Started Rebuild Journal Catalog.
[ OK ] Started Create Volatile Files and Directories.
Mounting RPC Pipe File System...
Starting Update UTMP about System Boot/Shutdown...
[FAILED] Failed to mount RPC Pipe File System.
See 'systemctl status var-lib-nfs-rpc_pipefs.mount' for details.
[DEPEND] Dependency failed for rpc_pipefs.target.
[DEPEND] Dependency failed for RPC security service for NFS client and server.
[ OK ] Started Update UTMP about System Boot/Shutdown.
[ OK ] Started Rebuild Hardware Database.
Starting Update is Completed...
[ OK ] Started Update is Completed.
[ OK ] Reached target System Initialization.
[ OK ] Started Daily Cleanup of Temporary Directories.
[ OK ] Reached target Timers.
[ OK ] Listening on D-Bus System Message Bus Socket.
[ OK ] Listening on RPCbind Server Activation Socket.
[ OK ] Reached target Sockets.
Starting RPC bind service...
[ OK ] Started Flexible branding.
[ OK ] Reached target Paths.
[ OK ] Reached target Basic System.
Starting OpenSSH Server Key Generation...
[ OK ] Started D-Bus System Message Bus.
Starting Resets System Activity Logs...
Starting GSSAPI Proxy Daemon...
Starting LSB: Bring up/down networking...
Starting Login Service...
Starting Self Monitoring and Reporting Technology (SMART) Daemon...
[ OK ] Started RPC bind service.
Starting Cleanup of Temporary Directories...
[ OK ] Started Resets System Activity Logs.
[ OK ] Started Login Service.
[ OK ] Started Cleanup of Temporary Directories.
[ OK ] Started GSSAPI Proxy Daemon.
[ OK ] Reached target NFS client services.
[ OK ] Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems.
Starting Permit User Sessions...
[ OK ] Started Permit User Sessions.
[ OK ] Started Command Scheduler.
[ OK ] Started LSB: Bring up/down networking.
[ OK ] Reached target Network.
Starting /etc/rc.d/rc.local Compatibility...
[ OK ] Reached target Network is Online.
Starting Notify NFS peers of a restart...
[ OK ] Started /etc/rc.d/rc.local Compatibility.
[ OK ] Started Console Getty.
[ OK ] Reached target Login Prompts.
[ OK ] Started Notify NFS peers of a restart.
02-28-2022 17:24:35 UTC : : Process id of the program :
02-28-2022 17:24:35 UTC : : #################################################
02-28-2022 17:24:35 UTC : : Starting Grid Installation
02-28-2022 17:24:35 UTC : : #################################################
02-28-2022 17:24:35 UTC : : Pre-Grid Setup steps are in process
02-28-2022 17:24:35 UTC : : Process id of the program :
[ OK ] Started OpenSSH Server Key Generation.
Starting OpenSSH server daemon...
[ OK ] Started OpenSSH server daemon.
02-28-2022 17:24:35 UTC : : Disable failed service var-lib-nfs-rpc_pipefs.mount
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
02-28-2022 17:24:35 UTC : : Resetting Failed Services
02-28-2022 17:24:35 UTC : : Sleeping for 60 seconds
[ OK ] Started Self Monitoring and Reporting Technology (SMART) Daemon.
[ OK ] Reached target Multi-User System.
[ OK ] Reached target Graphical Interface.
Starting Update UTMP about System Runlevel Changes...
[ OK ] Started Update UTMP about System Runlevel Changes.
Oracle Linux Server 7.9
Kernel 5.4.17-2136.304.4.1.el7uek.x86_64 on an x86_64
racnode2 login: 02-28-2022 17:25:35 UTC : : Systemctl state is running!
02-28-2022 17:25:35 UTC : : Setting correct permissions for /bin/ping
02-28-2022 17:25:35 UTC : : Public IP is set to 172.16.1.151
02-28-2022 17:25:35 UTC : : RAC Node PUBLIC Hostname is set to racnode2
02-28-2022 17:25:35 UTC : : Preparing host line for racnode2
02-28-2022 17:25:35 UTC : : Adding \n172.16.1.151\tracnode2.example.com\tracnode2 to /etc/hosts
02-28-2022 17:25:35 UTC : : Preparing host line for racnode2-priv
02-28-2022 17:25:35 UTC : : Adding \n192.168.17.151\tracnode2-priv.example.com\tracnode2-priv to /etc/hosts
02-28-2022 17:25:35 UTC : : Preparing host line for racnode2-vip
02-28-2022 17:25:35 UTC : : Adding \n172.16.1.161\tracnode2-vip.example.com\tracnode2-vip to /etc/hosts
02-28-2022 17:25:35 UTC : : Preparing host line for racnode-scan
02-28-2022 17:25:35 UTC : : Preapring Device list
02-28-2022 17:25:35 UTC : : Changing Disk permission and ownership /oradata/asm_disk01.img
02-28-2022 17:25:35 UTC : : Changing Disk permission and ownership /oradata/asm_disk02.img
02-28-2022 17:25:35 UTC : : Changing Disk permission and ownership /oradata/asm_disk03.img
02-28-2022 17:25:35 UTC : : Changing Disk permission and ownership /oradata/asm_disk04.img
02-28-2022 17:25:35 UTC : : Changing Disk permission and ownership /oradata/asm_disk05.img
02-28-2022 17:25:36 UTC : : Preapring Dns Servers list
02-28-2022 17:25:36 UTC : : Setting DNS Servers
02-28-2022 17:25:36 UTC : : Adding nameserver 172.16.1.25 in /etc/resolv.conf.
02-28-2022 17:25:36 UTC : : #####################################################################
02-28-2022 17:25:36 UTC : : RAC setup will begin in 2 minutes
02-28-2022 17:25:36 UTC : : ####################################################################
02-28-2022 17:26:06 UTC : : ###################################################
02-28-2022 17:26:06 UTC : : Pre-Grid Setup steps completed
02-28-2022 17:26:06 UTC : : ###################################################
02-28-2022 17:26:06 UTC : : Checking if grid is already configured
02-28-2022 17:26:06 UTC : : Public IP is set to 172.16.1.151
02-28-2022 17:26:06 UTC : : RAC Node PUBLIC Hostname is set to racnode2
02-28-2022 17:26:06 UTC : : Domain is defined to example.com
02-28-2022 17:26:06 UTC : : Setting Existing Cluster Node for node addition operation. This will be retrieved from racnode1
02-28-2022 17:26:06 UTC : : Existing Node Name of the cluster is set to racnode1
02-28-2022 17:26:06 UTC : : 172.16.1.150
02-28-2022 17:26:06 UTC : : Existing Cluster node resolved to IP. Check passed
02-28-2022 17:26:06 UTC : : Default setting of AUTO GNS VIP set to false. If you want to use AUTO GNS VIP, please pass DHCP_CONF as an env parameter set to true
02-28-2022 17:26:06 UTC : : RAC VIP set to 172.16.1.161
02-28-2022 17:26:06 UTC : : RAC Node VIP hostname is set to racnode2-vip
02-28-2022 17:26:06 UTC : : SCAN_NAME name is racnode-scan
02-28-2022 17:26:06 UTC : : 172.16.1.172
172.16.1.171
172.16.1.170
02-28-2022 17:26:06 UTC : : SCAN Name resolving to IP. Check Passed!
02-28-2022 17:26:06 UTC : : SCAN_IP set to the empty string
02-28-2022 17:26:06 UTC : : RAC Node PRIV IP is set to 192.168.17.151
02-28-2022 17:26:06 UTC : : RAC Node private hostname is set to racnode2-priv
02-28-2022 17:26:06 UTC : : CMAN_NAME set to the empty string
02-28-2022 17:26:06 UTC : : CMAN_IP set to the empty string
02-28-2022 17:26:06 UTC : : Password file generated
02-28-2022 17:26:06 UTC : : Common OS Password string is set for Grid user
02-28-2022 17:26:06 UTC : : Common OS Password string is set for Oracle user
02-28-2022 17:26:06 UTC : : GRID_RESPONSE_FILE env variable set to empty. AddNode.sh will use standard cluster responsefile
02-28-2022 17:26:06 UTC : : Location for User script SCRIPT_ROOT set to /common_scripts
02-28-2022 17:26:06 UTC : : ORACLE_SID is set to ORCLCDB
02-28-2022 17:26:06 UTC : : Setting random password for root/grid/oracle user
02-28-2022 17:26:06 UTC : : Setting random password for grid user
02-28-2022 17:26:06 UTC : : Setting random password for oracle user
02-28-2022 17:26:06 UTC : : Setting random password for root user
02-28-2022 17:26:06 UTC : : Cluster Nodes are racnode1 racnode2
02-28-2022 17:26:06 UTC : : Running SSH setup for grid user between nodes racnode1 racnode2
02-28-2022 17:26:17 UTC : : Running SSH setup for oracle user between nodes racnode1 racnode2
02-28-2022 17:26:27 UTC : : SSH check fine for the racnode1
02-28-2022 17:26:28 UTC : : SSH check fine for the racnode2
02-28-2022 17:26:28 UTC : : SSH check fine for the racnode2
02-28-2022 17:26:28 UTC : : SSH check fine for the oracle@racnode1
02-28-2022 17:26:28 UTC : : SSH check fine for the oracle@racnode2
02-28-2022 17:26:28 UTC : : SSH check fine for the oracle@racnode2
02-28-2022 17:26:28 UTC : : Setting Device permission to grid and asmadmin on all the cluster nodes
02-28-2022 17:26:28 UTC : : Nodes in the cluster racnode2
02-28-2022 17:26:28 UTC : : Setting Device permissions for RAC Install on racnode2
02-28-2022 17:26:28 UTC : : Preapring ASM Device list
02-28-2022 17:26:28 UTC : : Changing Disk permission and ownership
02-28-2022 17:26:28 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode2
02-28-2022 17:26:28 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode2
02-28-2022 17:26:28 UTC : : Populate Rac Env Vars on Remote Hosts
02-28-2022 17:26:28 UTC : : Command : su - $GRID_USER -c "ssh $node sudo echo \"export ASM_DEVICE_LIST=${ASM_DEVICE_LIST}\" >> /etc/rac_env_vars" execute on racnode2
02-28-2022 17:26:28 UTC : : Changing Disk permission and ownership
02-28-2022 17:26:28 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode2
02-28-2022 17:26:28 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode2
02-28-2022 17:26:29 UTC : : Populate Rac Env Vars on Remote Hosts
02-28-2022 17:26:29 UTC : : Command : su - $GRID_USER -c "ssh $node sudo echo \"export ASM_DEVICE_LIST=${ASM_DEVICE_LIST}\" >> /etc/rac_env_vars" execute on racnode2
02-28-2022 17:26:29 UTC : : Changing Disk permission and ownership
02-28-2022 17:26:29 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode2
02-28-2022 17:26:29 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode2
02-28-2022 17:26:29 UTC : : Populate Rac Env Vars on Remote Hosts
02-28-2022 17:26:29 UTC : : Command : su - $GRID_USER -c "ssh $node sudo echo \"export ASM_DEVICE_LIST=${ASM_DEVICE_LIST}\" >> /etc/rac_env_vars" execute on racnode2
02-28-2022 17:26:29 UTC : : Changing Disk permission and ownership
02-28-2022 17:26:29 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode2
02-28-2022 17:26:29 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode2
02-28-2022 17:26:29 UTC : : Populate Rac Env Vars on Remote Hosts
02-28-2022 17:26:29 UTC : : Command : su - $GRID_USER -c "ssh $node sudo echo \"export ASM_DEVICE_LIST=${ASM_DEVICE_LIST}\" >> /etc/rac_env_vars" execute on racnode2
02-28-2022 17:26:29 UTC : : Changing Disk permission and ownership
02-28-2022 17:26:29 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode2
02-28-2022 17:26:29 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode2
02-28-2022 17:26:29 UTC : : Populate Rac Env Vars on Remote Hosts
02-28-2022 17:26:29 UTC : : Command : su - $GRID_USER -c "ssh $node sudo echo \"export ASM_DEVICE_LIST=${ASM_DEVICE_LIST}\" >> /etc/rac_env_vars" execute on racnode2
02-28-2022 17:26:29 UTC : : Checking Cluster Status on racnode1
02-28-2022 17:26:29 UTC : : Checking Cluster
02-28-2022 17:26:30 UTC : : Cluster Check on remote node passed
02-28-2022 17:26:30 UTC : : Cluster Check went fine
02-28-2022 17:26:30 UTC : : CRSD Check went fine
02-28-2022 17:26:30 UTC : : CSSD Check went fine
02-28-2022 17:26:30 UTC : : EVMD Check went fine
02-28-2022 17:26:30 UTC : : Generating Responsefile for node addition
02-28-2022 17:26:30 UTC : : Clustered Nodes are set to racnode2:racnode2-vip:HUB
02-28-2022 17:26:30 UTC : : Running Cluster verification utility for new node racnode2 on racnode1
02-28-2022 17:26:30 UTC : : Nodes in the cluster racnode2
02-28-2022 17:26:30 UTC : : ssh to the node racnode1 and executing cvu checks on racnode2
02-28-2022 17:27:13 UTC : : Checking /tmp/cluvfy_check.txt if there is any failed check.
This software is "235" days old. It is a best practice to update the CRS home by downloading and applying the latest release update. Refer to MOS note 2731675.1 for more details.
Performing following verification checks ...
Physical Memory ...PASSED
Available Physical Memory ...PASSED
Swap Size ...FAILED (PRVF-7573)
Free Space: racnode2:/usr,racnode2:/var,racnode2:/etc,racnode2:/u01/app/21.3.0/grid,racnode2:/sbin,racnode2:/tmp ...PASSED
Free Space: racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/u01/app/21.3.0/grid,racnode1:/sbin,racnode1:/tmp ...PASSED
User Existence: oracle ...
Users With Same UID: 54321 ...PASSED
User Existence: oracle ...PASSED
User Existence: grid ...
Users With Same UID: 54332 ...PASSED
User Existence: grid ...PASSED
User Existence: root ...
Users With Same UID: 0 ...PASSED
User Existence: root ...PASSED
Group Existence: asmadmin ...PASSED
Group Existence: asmoper ...PASSED
Group Existence: asmdba ...PASSED
Group Existence: oinstall ...PASSED
Group Membership: oinstall ...PASSED
Group Membership: asmdba ...PASSED
Group Membership: asmadmin ...PASSED
Group Membership: asmoper ...PASSED
Run Level ...PASSED
Hard Limit: maximum open file descriptors ...PASSED
Soft Limit: maximum open file descriptors ...PASSED
Hard Limit: maximum user processes ...PASSED
Soft Limit: maximum user processes ...PASSED
Soft Limit: maximum stack size ...PASSED
Architecture ...PASSED
OS Kernel Version ...PASSED
OS Kernel Parameter: semmsl ...PASSED
OS Kernel Parameter: semmns ...PASSED
OS Kernel Parameter: semopm ...PASSED
OS Kernel Parameter: semmni ...PASSED
OS Kernel Parameter: shmmax ...PASSED
OS Kernel Parameter: shmmni ...PASSED
OS Kernel Parameter: shmall ...PASSED
OS Kernel Parameter: file-max ...PASSED
OS Kernel Parameter: ip_local_port_range ...PASSED
OS Kernel Parameter: rmem_default ...PASSED
OS Kernel Parameter: rmem_max ...PASSED
OS Kernel Parameter: wmem_default ...PASSED
OS Kernel Parameter: wmem_max ...PASSED
OS Kernel Parameter: aio-max-nr ...FAILED (PRVH-0521)
OS Kernel Parameter: panic_on_oops ...PASSED
Package: kmod-20-21 (x86_64) ...PASSED
Package: kmod-libs-20-21 (x86_64) ...PASSED
Package: binutils-2.23.52.0.1 ...PASSED
Package: libgcc-4.8.2 (x86_64) ...PASSED
Package: libstdc++-4.8.2 (x86_64) ...PASSED
Package: sysstat-10.1.5 ...PASSED
Package: ksh ...PASSED
Package: make-3.82 ...PASSED
Package: glibc-2.17 (x86_64) ...PASSED
Package: glibc-devel-2.17 (x86_64) ...PASSED
Package: libaio-0.3.109 (x86_64) ...PASSED
Package: nfs-utils-1.2.3-15 ...PASSED
Package: smartmontools-6.2-4 ...PASSED
Package: net-tools-2.0-0.17 ...PASSED
Package: policycoreutils-2.5-17 ...PASSED
Package: policycoreutils-python-2.5-17 ...PASSED
Users With Same UID: 0 ...PASSED
Current Group ID ...PASSED
Root user consistency ...PASSED
Node Addition ...
CRS Integrity ...PASSED
Clusterware Version Consistency ...PASSED
'/u01/app/21.3.0/grid' ...PASSED
Node Addition ...PASSED
Host name ...PASSED
Node Connectivity ...
Hosts File ...PASSED
Check that maximum (MTU) size packet goes through subnet ...PASSED
subnet mask consistency for subnet "172.16.1.0" ...PASSED
subnet mask consistency for subnet "192.168.17.0" ...PASSED
Node Connectivity ...PASSED
Multicast or broadcast check ...PASSED
ASM Network ...PASSED
Device Checks for ASM ...
Package: cvuqdisk-1.0.10-1 ...PASSED
ASM device sharedness check ...
Shared Storage Accessibility:/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img ...PASSED
ASM device sharedness check ...PASSED
Access Control List check ...PASSED
Device Checks for ASM ...PASSED
Database home availability ...PASSED
OCR Integrity ...PASSED
Time zone consistency ...PASSED
User Not In Group "root": grid ...PASSED
Time offset between nodes ...PASSED
resolv.conf Integrity ...PASSED
DNS/NIS name service ...PASSED
User Equivalence ...PASSED
Software home: /u01/app/21.3.0/grid ...PASSED
/dev/shm mounted as temporary file system ...PASSED
zeroconf check ...PASSED
Pre-check for node addition was unsuccessful on all the nodes.
Failures were encountered during execution of CVU verification request "stage -pre nodeadd".
Swap Size ...FAILED
racnode2: PRVF-7573 : Sufficient swap size is not available on node "racnode2"
[Required = 16GB (1.6777216E7KB) ; Found = 0.0 bytes]
racnode1: PRVF-7573 : Sufficient swap size is not available on node "racnode1"
[Required = 16GB (1.6777216E7KB) ; Found = 0.0 bytes]
OS Kernel Parameter: aio-max-nr ...FAILED
racnode2: PRVH-0521 : OS kernel parameter "aio-max-nr" does not have expected
current value on node "racnode2" [Expected = "1048576" ; Current =
"65536";].
racnode1: PRVH-0521 : OS kernel parameter "aio-max-nr" does not have expected
current value on node "racnode1" [Expected = "1048576" ; Current =
"65536";].
CVU operation performed: stage -pre nodeadd
Date: Feb 28, 2022 5:26:31 PM
Clusterware version: 21.0.0.0.0
CVU home: /u01/app/21.3.0/grid
Grid home: /u01/app/21.3.0/grid
User: grid
Operating system: Linux5.4.17-2136.304.4.1.el7uek.x86_64
02-28-2022 17:27:13 UTC : : CVU Checks are ignored as IGNORE_CVU_CHECKS set to true. It is recommended to set IGNORE_CVU_CHECKS to false and meet all the cvu checks requirement. RAC installation might fail, if there are failed cvu checks.
02-28-2022 17:27:13 UTC : : Running Node Addition and cluvfy test for node racnode2
02-28-2022 17:27:13 UTC : : Copying /tmp/grid_addnode_21c.rsp on remote node racnode1
02-28-2022 17:27:13 UTC : : Running GridSetup.sh on racnode1 to add the node to existing cluster
02-28-2022 17:27:57 UTC : : Node Addition performed. removing Responsefile
02-28-2022 17:27:57 UTC : : Running root.sh on node racnode2
02-28-2022 17:27:57 UTC : : Nodes in the cluster racnode2
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
02-28-2022 17:41:21 UTC : : Checking Cluster
02-28-2022 17:41:21 UTC : : Cluster Check passed
02-28-2022 17:41:21 UTC : : Cluster Check went fine
02-28-2022 17:41:21 UTC : : CRSD Check failed!
02-28-2022 17:41:21 UTC : : Error has occurred in Grid Setup, Please verify!
^C
[root@vm-oracle dockerfiles]#
[root@vm-oracle dockerfiles]#
[root@vm-oracle dockerfiles]# docker exec -i -t racnode2 /bin/bash
[grid@racnode2 ~]$ tail -n 50 /tmp/orod.log
02-28-2022 17:26:29 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode2
02-28-2022 17:26:29 UTC : : Populate Rac Env Vars on Remote Hosts
02-28-2022 17:26:29 UTC : : Command : su - $GRID_USER -c "ssh $node sudo echo \"export ASM_DEVICE_LIST=${ASM_DEVICE_LIST}\" >> /etc/rac_env_vars" execute on racnode2
02-28-2022 17:26:29 UTC : : Changing Disk permission and ownership
02-28-2022 17:26:29 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode2
02-28-2022 17:26:29 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode2
02-28-2022 17:26:29 UTC : : Populate Rac Env Vars on Remote Hosts
02-28-2022 17:26:29 UTC : : Command : su - $GRID_USER -c "ssh $node sudo echo \"export ASM_DEVICE_LIST=${ASM_DEVICE_LIST}\" >> /etc/rac_env_vars" execute on racnode2
02-28-2022 17:26:29 UTC : : Changing Disk permission and ownership
02-28-2022 17:26:29 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode2
02-28-2022 17:26:29 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode2
02-28-2022 17:26:29 UTC : : Populate Rac Env Vars on Remote Hosts
02-28-2022 17:26:29 UTC : : Command : su - $GRID_USER -c "ssh $node sudo echo \"export ASM_DEVICE_LIST=${ASM_DEVICE_LIST}\" >> /etc/rac_env_vars" execute on racnode2
02-28-2022 17:26:29 UTC : : Checking Cluster Status on racnode1
02-28-2022 17:26:29 UTC : : Checking Cluster
02-28-2022 17:26:30 UTC : : Cluster Check on remote node passed
02-28-2022 17:26:30 UTC : : Cluster Check went fine
02-28-2022 17:26:30 UTC : : CRSD Check went fine
02-28-2022 17:26:30 UTC : : CSSD Check went fine
02-28-2022 17:26:30 UTC : : EVMD Check went fine
02-28-2022 17:26:30 UTC : : Generating Responsefile for node addition
02-28-2022 17:26:30 UTC : : Clustered Nodes are set to racnode2:racnode2-vip:HUB
02-28-2022 17:26:30 UTC : : Running Cluster verification utility for new node racnode2 on racnode1
02-28-2022 17:26:30 UTC : : Nodes in the cluster racnode2
02-28-2022 17:26:30 UTC : : ssh to the node racnode1 and executing cvu checks on racnode2
02-28-2022 17:27:13 UTC : : Checking /tmp/cluvfy_check.txt if there is any failed check.
02-28-2022 17:27:13 UTC : : CVU Checks are ignored as IGNORE_CVU_CHECKS set to true. It is recommended to set IGNORE_CVU_CHECKS to false and meet all the cvu checks requirement. RAC installation might fail, if there are failed cvu checks.
02-28-2022 17:27:13 UTC : : Running Node Addition and cluvfy test for node racnode2
02-28-2022 17:27:13 UTC : : Copying /tmp/grid_addnode_21c.rsp on remote node racnode1
02-28-2022 17:27:13 UTC : : Running GridSetup.sh on racnode1 to add the node to existing cluster
Launching Oracle Grid Infrastructure Setup Wizard...
As a root user, execute the following script(s):
1. /u01/app/21.3.0/grid/root.sh
Execute /u01/app/21.3.0/grid/root.sh on the following nodes:
[racnode2]
The scripts can be executed in parallel on all the nodes.
Successfully Setup Software.
02-28-2022 17:27:57 UTC : : Node Addition performed. removing Responsefile
02-28-2022 17:27:57 UTC : : Running root.sh on node racnode2
02-28-2022 17:27:57 UTC : : Nodes in the cluster racnode2
02-28-2022 17:41:21 UTC : : Checking Cluster
02-28-2022 17:41:21 UTC : : Cluster Check passed
02-28-2022 17:41:21 UTC : : Cluster Check went fine
02-28-2022 17:41:21 UTC : : CRSD Check failed!
02-28-2022 17:41:21 UTC : : Error has occurred in Grid Setup, Please verify!
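A side note on the cluvfy output above: the only failed checks are the swap size (PRVF-7573) and the aio-max-nr kernel parameter (PRVH-0521). I assume both are Docker-host settings rather than anything inside the container, so my plan (my own guess, not a step from the repo) is to fix them on the host roughly like this:

# on the Docker host (assumption: the containers see the host's kernel parameters and swap)
sysctl -w fs.aio-max-nr=1048576      # persist in /etc/sysctl.conf as well
fallocate -l 16G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile                     # add an /etc/fstab entry to keep it across reboots

Since IGNORE_CVU_CHECKS is true the script carries on anyway, so I don't think these two checks explain the CRSD failure, but I'm listing them for completeness.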
Checking crsctl:
[grid@racnode2 debug]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
[grid@racnode2 debug]$ $GRID_HOME/bin/crsctl check cluster
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
crsctl stat res -t -init
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
1 ONLINE OFFLINE STABLE
ora.cluster_interconnect.haip
1 ONLINE OFFLINE STABLE
ora.crf
1 ONLINE ONLINE racnode2 STABLE
ora.crsd
1 ONLINE OFFLINE STABLE
ora.cssd
1 ONLINE OFFLINE STABLE
ora.cssdmonitor
1 ONLINE ONLINE racnode2 STABLE
ora.ctssd
1 ONLINE OFFLINE STABLE
ora.diskmon
1 OFFLINE OFFLINE STABLE
ora.evmd
1 ONLINE INTERMEDIATE racnode2 STABLE
ora.gipcd
1 ONLINE ONLINE racnode2 STABLE
ora.gpnpd
1 ONLINE ONLINE racnode2 STABLE
ora.mdnsd
1 ONLINE ONLINE racnode2 STABLE
ora.storage
1 ONLINE OFFLINE STABLE
--------------------------------------------------------------------------------
[grid@racnode2 debug]$ crsctl start res ora.crsd -init
CRS-2672: Attempting to start 'ora.cssd' on 'racnode2'
CRS-2672: Attempting to start 'ora.diskmon' on 'racnode2'
CRS-2676: Start of 'ora.diskmon' on 'racnode2' succeeded
CRS-1609: This node is unable to communicate with other nodes in the cluster and is going down to preserve cluster integrity; details at (:CSSNM00086:) in /u01/app/grid/diag/crs/racnode2/crs/trace/onmd.trc.
CRS-2674: Start of 'ora.cssd' on 'racnode2' failed
CRS-2679: Attempting to clean 'ora.cssd' on 'racnode2'
CRS-2681: Clean of 'ora.cssd' on 'racnode2' succeeded
CRS-4000: Command Start failed, or completed with errors.
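CRS-1609 suggests racnode2 cannot reach racnode1 over the interconnect during CSS startup, so I also want to sanity-check basic connectivity between the two containers; something like the following is what I have in mind (assuming the -priv hostnames from /opt/containers/rac_host_file are the right ones to test):

# inside racnode2: can racnode1 be reached on both the public and the private network?
ping -c 3 racnode1
ping -c 3 racnode1-priv
# on the Docker host: confirm both containers are attached to the private network
docker network inspect rac_priv1_nw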
Here is some more information from the trace:
tail -n 50 /u01/app/grid/diag/crs/racnode2/crs/trace/onmd.trc
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitGNS_READY (0x00040000) set
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitHAVE_ICIN (0x00200000) not set
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitACTTHRD_DONE (0x00800000) set
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitOPENBUSS (0x01000000) not set
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitBCCM_COMPL (0x02000000) set
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitCOMPLETE (0x20000000) set
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] Initialization not complete !Error!
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] #### End diagnostic data for the Core layer ####
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] ### Begin diagnostic data for the GM Peer layer ###
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] GMP Status: State CMStateINIT, incarnation 0, holding incoming requests 0
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] Status for active hub node racnode2, number 2:
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] Connect: Started 1 completed 1 Ready 1 Fully Connected 0 !Error!
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] #### End diagnostic data for the GM Peer layer ####
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] ### Begin diagnostic data for the NM layer ###
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] Local node racnode2, number 2, state is clssnmNodeStateJOINING
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] Status for node racnode1, number 1, uniqueness 1646067289, node ID 0
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] State clssnmNodeStateINACTIVE, Connect: started 1 completed 0 OK
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] Status for node racnode2, number 2, uniqueness 1646070855, node ID 0
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] State clssnmNodeStateJOINING, Connect: started 1 completed 1 OK
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] #### End diagnostic data for the NM layer ####
2022-02-28 18:02:16.118 : ONMD:140039320811264: [ INFO] ######## End Diagnostic Dump ########
2022-02-28 18:02:16.119 : ONMD:140039320811264: [ INFO] clsscssdcmExit: Status: 4, Abort flag: 0, Core flag: 0, Don't abort: 0, flag: 112
2022-02-28 18:02:16.119 : ONMD:140039320811264: scls_dump_stack_all_threads - entry
2022-02-28 18:02:16.119 : ONMD:140039320811264: scls_dump_stack_all_threads - stat of /usr/bin/gdb failed with errno 2
2022-02-28 18:02:16.119 : ONMD:140039320811264: [ INFO] clsscssdcmExit: Now aborting
CLSB:140039320811264: [ ERROR] Oracle Clusterware infrastructure error in ONMD (OS PID 31069): Fatal signal 6 has occurred in program onmd thread 140039320811264; nested signal count is 1
Trace file /u01/app/grid/diag/crs/racnode2/crs/trace/onmd.trc
Oracle Database 21c Clusterware Release 21.0.0.0.0 - Production
Version 21.3.0.0.0 Copyright 1996, 2021 Oracle. All rights reserved.
DDE: Flood control is not active
2022-02-28T18:02:16.136201+00:00
Incident 17 created, dump file: /u01/app/grid/diag/crs/racnode2/crs/incident/incdir_17/onmd_i17.trc
CRS-8503 [] [] [] [] [] [] [] [] [] [] [] []
2022-02-28 18:02:16.499 : ONMD:140038136391424: [ INFO] clssnmvDHBValidateNCopy: node 1, racnode1, has a disk HB, but no network HB, DHB has rcfg 541533305, wrtcnt, 4047, LATS 9721354, lastSeqNo 4046, uniqueness 1646067289, timestamp 1646071336/9721084
2022-02-28 18:02:16.514 :GIPCGMOD:140039297148672: [ INFO] gipcmodGipcCallbackEndpClosed: [gipc] Endpoint close for endp 0x7f5d040399b0 [0000000000008e40] { gipcEndpoint : localAddr 'gipcha://racnode2:ef23-2c4e-e14a-209e', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/0095-cc7b-01e4-4be7', numPend 0, numReady 1, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x563dd06c1e20, ready 1, wobj 0x7f5d0403c6e0, sendp (nil) status 0flags 0x2003860e, flags-2 0x50, usrFlags 0x0 }
2022-02-28 18:02:16.514 :GIPCHDEM:140039295571712: [ INFO] gipchaDaemonProcessClientReq: processing req 0x7f5d300fc560 type gipchaClientReqTypeDeleteName (12)
2022-02-28 18:02:16.514 :GIPCGMOD:140038128506624: [ INFO] gipcmodGipcCompleteRequest: [gipc] completing req 0x7f5d04050db0 [0000000000008ea3] { gipcReceiveRequest : peerName '', data (nil), len 0, olen 0, off 0, parentEndp 0x7f5d040399b0, ret gipcretConnectionLost (12), objFlags 0x0, reqFlags 0x2 }
2022-02-28 18:02:16.514 :GIPCGMOD:140038128506624: [ INFO] gipcmodGipcCompleteRecv: [gipc] Completed recv for req 0x7f5d04050db0 [0000000000008ea3] { gipcReceiveRequest : peerName '', data (nil), len 0, olen 0, off 0, parentEndp 0x7f5d040399b0, ret gipcretConnectionLost (12), objFlags 0x0, reqFlags 0x2 }
2022-02-28 18:02:16.514 : ONMD:140038128506624: [ INFO] clssnmeventhndlr: Disconnecting endp 0x8e40 ninf 0x563dd06c3250
2022-02-28 18:02:16.514 : ONMD:140038128506624: [ INFO] clssnmDiscHelper: racnode1, node(1) connection failed, endp (0x8e40), probe(0x7f5d00000000), ninf->endp 0x7f5d00008e40
2022-02-28 18:02:16.514 : ONMD:140038128506624: [ INFO] clssnmDiscHelper: node 1 clean up, endp (0x8e40), init state 0, cur state 0
2022-02-28 18:02:16.514 :GIPCXCPT:140038128506624: [ INFO] gipcInternalDissociate: obj 0x7f5d040399b0 [0000000000008e40] { gipcEndpoint : localAddr 'gipcha://racnode2:ef23-2c4e-e14a-209e', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/0095-cc7b-01e4-4be7', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7f5d0403c6e0, sendp (nil) status 0flags 0x2003860e, flags-2 0x50, usrFlags 0x0 } not associated with any container, ret gipcretFail (1)
2022-02-28 18:02:16.514 :GIPCXCPT:140038128506624: [ INFO] gipcDissociateF [clssnmDiscHelper : clssnm.c : 4488]: EXCEPTION[ ret gipcretFail (1) ] failed to dissociate obj 0x7f5d040399b0 [0000000000008e40] { gipcEndpoint : localAddr 'gipcha://racnode2:ef23-2c4e-e14a-209e', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/0095-cc7b-01e4-4be7', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7f5d0403c6e0, sendp (nil) status 0flags 0x2003860e, flags-2 0x50, usrFlags 0x0 }, flags 0x0
2022-02-28 18:02:16.514 :GIPCXCPT:140038128506624: [ INFO] gipcInternalDissociate: obj 0x7f5d040399b0 [0000000000008e40] { gipcEndpoint : localAddr 'gipcha://racnode2:ef23-2c4e-e14a-209e', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/0095-cc7b-01e4-4be7', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7f5d0403c6e0, sendp (nil) status 0flags 0x2003860e, flags-2 0x50, usrFlags 0x0 } not associated with any container, ret gipcretFail (1)
2022-02-28 18:02:16.514 :GIPCXCPT:140038128506624: [ INFO] gipcDissociateF [clssnmDiscHelper : clssnm.c : 4645]: EXCEPTION[ ret gipcretFail (1) ] failed to dissociate obj 0x7f5d040399b0 [0000000000008e40] { gipcEndpoint : localAddr 'gipcha://racnode2:ef23-2c4e-e14a-209e', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/0095-cc7b-01e4-4be7', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7f5d0403c6e0, sendp (nil) status 0flags 0x2003860e, flags-2 0x50, usrFlags 0x0 }, flags 0x0
2022-02-28 18:02:16.515 : ONMD:140038128506624: [ INFO] clssscSelect: gipcwait returned with status gipcretPosted (17)
2022-02-28 18:02:16.515 : GIPCTLS:140038128506624: [ INFO] gipcmodTlsDisconnect: [tls] disconnect issued on endp 0x7f5d040399b0 [0000000000008e40] { gipcEndpoint : localAddr 'gipcha://racnode2:ef23-2c4e-e14a-209e', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/0095-cc7b-01e4-4be7', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x563dd06a18a0, ready 1, wobj 0x7f5d0403c6e0, sendp (nil) status 0flags 0x2603860e, flags-2 0x50, usrFlags 0x0 }
2022-02-28 18:02:16.515 : ONMD:140038128506624: [ INFO] clssnmDiscEndp: gipcDestroy 0x8e40
It looks like racnode2 can't communicate with racnode1. Do I need to specify the connection manager when creating the racnode2 container, i.e. pass something like the following?
-e CMAN_HOSTNAME=racnode-cman1 \
-e CMAN_IP=172.16.1.15 \
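Separately, as a next diagnostic step (my own idea, not something the repo documents), I was going to compare the interconnect subnet registered in the clusterware on racnode1 with the interface racnode2 actually received:

# on racnode1, as the grid user: subnets registered as public / cluster_interconnect
$GRID_HOME/bin/oifcfg getif
# on racnode2: confirm there is an interface in that interconnect subnet (192.168.17.0 here)
ip addr show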
Many thanks!