2006-02-06 Network speed.

Between "serv.ntmm.org" (the 2002-vintage [Gus] AthlonXP-2000 running slow at ~1200MHz, with its original motherboard and a Realtek PCI network card, r8169 driver) and "box.ntmm.org" (new amd64 system with an on-board [PCI-X] Marvell gigabit card), through the cheap Netgear 8-port gigabit switch.

A 196MB file (from /dev/urandom) in /dev/shm was copied by ftp to serv's /dev/shm and back again:

    serv -> box:  204800000 bytes received in 6.7 secs (3e+04 Kbytes/sec)
    box -> serv:  204800000 bytes sent in 4.81 secs (4.2e+04 Kbytes/sec)

i.e. roughly 30MB/s and 43MB/s. Using http from serv -> box gave practically exactly the same result:

    serv -> box:  204,800,000  30.55M/s

==================

Now a few file copies involving real disks, first locally and then over the network. (Note that /dev/sda (/home/public/) is a SATA disk on a SATA controller on the same PCI bus, and that /home is /dev/md0, a software RAID of /dev/hd[bc]1.)

    cp -a ~/Maildir/ /dev/shm

Maildir is 134MB, with 10688 files and 466 dirs, and had not recently been accessed. This took 9.6s the first time and 2.6s the next, after caching. So, about 14MB/s to read these small files from the RAID.

    cp -a /dev/shm/Maildir MD2 ; sync

8.9s (but this was after a first go whose time wasn't recorded: could that repetition have had a large effect on the time?).

    cp /home/public/video/..../file.avi /dev/shm/

A single file, 234MB, in 4.9s: 47.8MB/s. Same, over the network: 10.8s: 21.7MB/s.

    cp /srv/portage/distfiles/linux* /dev/shm/

Ten files, 217MB in total, in 4.13s: 52.5MB/s. Same, over the network: 10.7s: 20.3MB/s.

    cp -a /srv/portage/{app,dev,net}* /dev/shm

A total of 204MB in 49005 files and 12095 dirs, in 17.2s, and the same time (17.1s) on repeat, suggesting that the syscalls to create files in the shm are much of the work! Note that a du -s on all the files was run before the first copy, so the metadata was already cached. This took a huge time (minutes) over the network: 120s for just 57M of ..../net* !

    cp -a /srv/portage/{app,dev,net}* /tmp/

Similar to the above, ~16.2s; I was curious whether /tmp (tmpfs) behaves any differently from /dev/shm (or is that just tmpfs too?).

So far: network I/O speed (including the effects of the internal connection, i.e. the PCI bus) is certainly below the disk I/O speed, and interference is expected between the SATA and network controllers sharing the slow PCI bus (to be tested in the coming network tests).
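For the record, a sketch of how these figures could be reproduced later; the exact invocations were not written down at the time, so the dd/time/wget usage below, the "testfile" name and the URL are assumptions rather than a transcript:

    # Assumed: make the random source file in tmpfs
    # (bs=200k count=1000 gives exactly the 204800000 bytes seen above)
    dd if=/dev/urandom of=/dev/shm/testfile bs=200k count=1000

    # Assumed: time a local copy, with a sync so the write really reaches the disk
    time sh -c 'cp /dev/shm/testfile /home/public/testfile; sync'

    # Assumed: the http pull run on box, fetching from serv and letting
    # wget report the rate (the 30.55M/s figure above is in this format)
    wget -O /dev/null http://serv.ntmm.org/testfile

The MB/s figures quoted are simply size divided by wall-clock time (e.g. 234MB / 4.9s = 47.8MB/s); no per-file overhead is separated out, which is why the many-small-file trees look so much slower.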