Neural Radiance Fields (NeRF) are a technique from the neural rendering research field, while photogrammetry is a research field of its own. In practice, however, these boundaries are mostly turf wars, and there is a lot of overlap between the two fields.
For example, most NeRF implementations recommend using COLMAP (traditionally a photogrammetry tool) to obtain the camera positions/rotations that accompany the input images. This structure-from-motion step is therefore shared between NeRF and photogrammetry (except for a few research works that also optimize the camera positions/rotations through a neural network).
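For the curious, here is a minimal sketch of that shared step, assuming the `colmap` CLI is installed and on PATH; the directory names are placeholders. The resulting sparse model contains the camera poses that NeRF codebases ingest (often converted to their own format by helper scripts).

```python
import subprocess
from pathlib import Path

image_dir = Path("images")        # input photos of the scene (placeholder)
workspace = Path("colmap_out")    # where COLMAP writes its results (placeholder)
workspace.mkdir(exist_ok=True)
database = workspace / "database.db"

# 1. Detect and describe keypoints in every image.
subprocess.run(["colmap", "feature_extractor",
                "--database_path", str(database),
                "--image_path", str(image_dir)], check=True)

# 2. Match keypoints across all image pairs.
subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", str(database)], check=True)

# 3. Incremental structure-from-motion: recovers camera intrinsics,
#    positions, and rotations (plus a sparse point cloud).
sparse_dir = workspace / "sparse"
sparse_dir.mkdir(exist_ok=True)
subprocess.run(["colmap", "mapper",
                "--database_path", str(database),
                "--image_path", str(image_dir),
                "--output_path", str(sparse_dir)], check=True)
```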
After this structure-from-motion step, in NeRF you train a neural renderer, while in photogrammetry you would run a multi-view stereo step/package that uses more traditional optimization algorithms to densely reconstruct the scene geometry.
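To make "train a neural renderer" concrete, below is a minimal numpy sketch of NeRF's core volume-rendering quadrature, the part that turns the network's outputs into a pixel. The MLP and the training loop are omitted; `sigmas` and `rgbs` are stand-ins for the densities and colors a trained network would predict along one camera ray.

```python
import numpy as np

def composite_ray(sigmas, rgbs, ts):
    """Volume-render one ray, as in the NeRF paper (Mildenhall et al. 2020).

    sigmas: (N,)   densities predicted by the MLP at N samples along the ray
    rgbs:   (N, 3) colors predicted at the same samples
    ts:     (N,)   distances of the samples from the camera
    """
    # Distance between adjacent samples; the last interval is open-ended.
    deltas = np.append(np.diff(ts), 1e10)
    # Opacity of each interval: alpha_i = 1 - exp(-sigma_i * delta_i).
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.append(1.0, 1.0 - alphas[:-1]))
    weights = alphas * trans
    # Expected color along the ray (the rendered pixel).
    return (weights[:, None] * rgbs).sum(axis=0)

# Toy usage with random stand-ins for the MLP's predictions.
rng = np.random.default_rng(0)
ts = np.linspace(2.0, 6.0, 64)              # near/far bounds of the ray
sigmas = rng.uniform(0.0, 1.0, 64)
rgbs = rng.uniform(0.0, 1.0, (64, 3))
pixel = composite_ray(sigmas, rgbs, ts)
```

During training, the rendered `pixel` would be compared against the ground-truth pixel (typically with a mean-squared-error loss) and the gradient propagated back into the MLP.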
The expected outputs of the two techniques differ slightly. NeRF produces renderings and can optionally export a mesh (using the marching cubes algorithm). Photogrammetry produces meshes and, in the process, might render the scene for editing purposes.
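As an illustration of the optional mesh export, the sketch below samples a density field on a regular grid and runs marching cubes over it (via scikit-image here). The `density_fn` is a hypothetical stand-in for querying a trained NeRF; the grid resolution and isosurface level are tuning knobs.

```python
import numpy as np
from skimage import measure   # pip install scikit-image

def density_fn(xyz):
    # Hypothetical stand-in for a trained NeRF's density query;
    # here, a solid sphere of radius 0.5 centered at the origin.
    return (0.5 - np.linalg.norm(xyz, axis=-1)).clip(min=0.0)

# Sample densities on a regular grid over the scene's bounding box.
n = 128
lin = np.linspace(-1.0, 1.0, n)
grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), axis=-1)
densities = density_fn(grid.reshape(-1, 3)).reshape(n, n, n)

# Extract the isosurface; the level (density threshold) is a tuning knob.
verts, faces, normals, _ = measure.marching_cubes(densities, level=0.01)
# `verts`/`faces` can then be written out as e.g. an OBJ or PLY file.
```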
Why would you prefer NeRF to photogrammetry? Or vice versa?