Not necessarily. One can capture 10G to disk using "only" a RAID-0 of 8-10 mechanical disks: that covers both the bandwidth and the space, and you can use regular filesystems such as XFS.
40G is a bit more difficult: with mechanical disks you need a huge RAID (by simple, direct scaling: 32-40 disks) to reach the necessary bandwidth, and if you go with SSDs you will need a lot of them too, just to have enough space to store any meaningful amount of traffic.
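The scaling is just line rate divided by per-disk write throughput. A minimal sketch, assuming ~150 MB/s sustained sequential writes per spindle (a figure I'm picking for illustration, not from the parent comment):

    # back-of-envelope RAID-0 sizing for line-rate packet capture
    import math

    DISK_MBPS = 150  # assumed sustained sequential write per mechanical disk, MB/s

    def disks_needed(link_gbps):
        bytes_per_sec = link_gbps * 1e9 / 8   # line rate in bytes/s
        mb_per_sec = bytes_per_sec / 1e6
        return math.ceil(mb_per_sec / DISK_MBPS)

    print(disks_needed(10))   # 9  -> matches the 8-10 disk estimate
    print(disks_needed(40))   # 34 -> matches the 32-40 disk estimate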
I remember seeing papers on on-the-fly compression for network traffic, but IIRC the results were not very impressive and the performance cost was noticeable.
> but how do you then get that many packets to disk
it may be possible to do disk i/o at that rate, e.g. over pci-e or with a dedicated appliance for dumping the entire stream, but you would run out of storage pretty fast.
for example, a quick back-of-the-envelope calculation, where you dump the packet stream from 4x10gbps cards at the minimal frame size (84 bytes on the wire, for ethernet), shows that you would exhaust the storage in approx. 4.5 minutes :)
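spelling that calculation out (the 1 TB capacity is my assumption, chosen because it roughly reproduces the ~4.5 minute figure; the preamble/SFD/inter-frame gap occupy the wire but are not captured to disk):

    # how fast does line-rate capture fill the array?
    LINK_BPS   = 4 * 10e9   # 4x10GbE aggregate line rate, bits/s
    WIRE_BYTES = 84         # minimal frame on the wire: 7 preamble + 1 SFD
                            # + 64 frame + 12 inter-frame gap
    DISK_BYTES = 64         # bytes that actually land on disk per frame

    frames_per_sec   = LINK_BPS / 8 / WIRE_BYTES    # ~59.5 Mpps
    capture_bytes_ps = frames_per_sec * DISK_BYTES  # ~3.8 GB/s to disk

    CAPACITY = 1e12                                 # 1 TB, assumed
    print(CAPACITY / capture_bytes_ps / 60)         # ~4.4 minutes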
> While I don't know the exact overhead of 10GigE, there is likely still some overhead.
on 10gige pipes, at the max ethernet mtu (1500 bytes), approx. 94% of the bandwidth is available for user data (accounting for things like the inter-frame gap, crc checksums, protocol headers etc). with jumbo frames that number goes up to ~99%.
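the arithmetic behind those percentages, treating "user data" as tcp payload (my reading of the parent comment, since pure layer-2 efficiency at mtu 1500 would be ~97.5%):

    # per-frame ethernet overhead: 7 preamble + 1 SFD + 14 header
    # + 4 CRC + 12 inter-frame gap = 38 bytes on the wire;
    # IPv4 + TCP headers eat another 40 bytes out of the payload
    def goodput(mtu):
        wire = mtu + 38        # bytes occupied on the wire per frame
        user = mtu - 40        # tcp payload carried per frame
        return user / wire

    print(goodput(1500))   # ~0.949 -> the ~94% figure above
    print(goodput(9000))   # ~0.991 -> ~99% with jumbo frames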
Presumably you need flash drives, and probably an append-only filesystem?