I haven't tried a "sleep 45" in rc.local, but I'm sure it, or something under 120 seconds, would work. I can ssh into the server just as the daemon comes up and run "mount -a", and it works fine by then. I'd hate to put another delay into the boot, since it already has to time out twice for the QLogic boards (2 minutes!) because Ubuntu's kernel package refuses to add the firmware to the initrd (and I refuse to tinker with it when it could easily be "fixed" by auto-loading firmware as it's detected; we all have a decent/large /boot fs anyway). https://bugs.launchpad.net/ubuntu/+source/udev/+bug/328550
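For anyone who wants to try the delay route, something like this in /etc/rc.local is roughly what I had in mind (the 45 seconds is a guess on my part; anything that lands inside that 120-second window ought to do):

    #!/bin/sh -e
    # Give the network/DNS time to settle, then retry any NFS
    # entries from fstab that failed to mount during boot.
    ( sleep 45 && mount -a -t nfs ) &
    exit 0

Backgrounding it at least keeps the sleep from adding yet another delay to the boot itself.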
Anyway, I know the network itself is running fine, because the /etc/hosts hack works around this bug. The problem appears to be that mountnfs.sh doesn't know DNS isn't up yet, or something along those lines. Kinda reminds me of the NFS automount issue I had in v6.06 (only the first NFS entry in fstab would mount at boot; all the others were ignored).
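For reference, the /etc/hosts hack is nothing more than pinning the NAS hostname to its address so the mount never has to touch DNS (the name and IP below are made up):

    # /etc/hosts -- hard-code the NAS so mountnfs.sh
    # can resolve it before the resolver is available
    192.168.10.50   nas01.example.com   nas01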
This is just one server connecting to a single NAS NFS share. I have 18 servers using 4 NAS servers and one Ubuntu file server, so it's going to get real ugly if I have to hack /etc/hosts on all of them. Ugly as in: don't ever move anything to new IPs.
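The fstab entries are all hostname-based, roughly like this (share and mount point are illustrative), which is exactly why freezing names in /etc/hosts on 18 machines makes re-IPing the NAS boxes such a liability:

    # /etc/fstab -- only mounts at boot if nas01 resolves in time
    nas01:/export/data   /mnt/data   nfs   defaults   0   0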
I'd call Canonical for support, but we dropped it last year.