Category: Network
NFS stands for Network File System.
The available manuals are sufficient.
If it does not work after following the instructions, it may help
to restart the portmapper on the client side.
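How the portmapper is restarted depends on the distribution; on a sysvinit-based system it is typically
- #/etc/init.d/portmap restart
On newer systems the service is usually called rpcbind (#systemctl restart rpcbind).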
Some hints
On the Server side:
- Modify /etc/exports (see the example below)
- #exportfs -r to apply the changes
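As an illustration, an entry in /etc/exports could look like this (path and client pattern are placeholders, the options are one common choice):
- /data node*(rw,sync,no_subtree_check)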
On the client side:
- Change /etc/fstab (see the example below)
- mount the share
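A matching /etc/fstab entry could be (server name, export path and mount point are placeholders):
- server:/data /mnt/data nfs defaults 0 0
after which #mount /mnt/data mounts the share.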
Display the configuration via
- #showmount --exports/--all host
- #nfsstat -s/-c
Observations
Speed
For a file of 1GB: using
dd if=NFS_Directory/File of=/dev/null
only the server's HDD and the network speed are relevant, since nothing is written to disk on the client. Copying the same file multiple times, one can observe the following:
- The first time the speed is about 60MB/s. That is approximately the HDD's speed.
- The next run gives about 120MB/s. That is roughly 1Gb/s, the line speed.
- Further runs result in about 400MB/s (node34) or 800MB/s (node38). That is equivalent to Ramdisk --> /dev/null.
There seem to be two levels of buffering: after the first transmission the file remains in the server's cache and is transferred from there on the next run, while from the third run on it seems to be served from the client's own cache.
(NFS3 was used; rsize/wsize unset --> default; mtu=1500)
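A simple way to reproduce this and to tell the cache levels apart (the path is a placeholder, drop_caches needs root):
- #dd if=/mnt/data/File of=/dev/null bs=1M (repeat to step through the cache levels)
- #echo 3 > /proc/sys/vm/drop_caches (run on the client to force the next read back over the network, on the server to force it back to the HDD)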
Bonding with VLANs
In a bond setup with two interfaces per node (mtu=1500, rsize/wsize at default) the speed increases nearly linearly:
- Transfer from HDD: about 45-55MB/s, a little less than without bonding, maybe because of asymmetric reads on the HDD caused by re-requests of packets that appear lost due to out-of-order delivery.
- Transfer from Ramdisk or server cache: 190 to 230MB/s. Maximum speed with optimized TCP settings: 245MB/s.
- Cached on the client side: same as above (400 or 800MB/s). A larger dd block size yields up to 2.5GB/s.
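For reference, a bond of this kind can be set up roughly as follows (interface names, VLAN id and address are placeholders; balance-rr is presumably the mode in use here, since it is the one that stripes a single stream across both links and thereby causes the out-of-order delivery mentioned above):
- #modprobe bonding mode=balance-rr miimon=100
- #ip link set bond0 up
- #ifenslave bond0 eth0 eth1
- #vconfig add bond0 100
- #ip addr add 10.0.0.34/24 dev bond0.100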
Further tests and experiments with rsize/wsize, ring buffer settings, ... may improve the performance further. See TCP-Optimization and VLAN-Workaround.
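As a starting point for the rsize/wsize experiments, both can be passed as mount options (server, path and values are only an illustration):
- #mount -t nfs -o rsize=32768,wsize=32768 server:/data /mnt/data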
Tests to be made:
- Performance TCP vs. UDP
- Performance with Jumbo frames
- wsize/rsize settings
- NFS3 vs. NFS4 (--> NFS4 is not yet stable on Linux)
- How does out-of-order delivery affect the HDD read speed?
- ...
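Some of these tests only need different mount options or interface settings, for example (names and values are placeholders):
- #mount -t nfs -o proto=udp server:/data /mnt/data (TCP vs. UDP)
- #ip link set eth0 mtu 9000 (jumbo frames; the switches must support this as well)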