Those kinds of use cases are what Git LFS [1] was created to help with. The major Git hosting vendors support it (Bitbucket, GitHub, GitLab); I'm less certain about the open source implementations.
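For anyone who hasn't tried it, the day-to-day setup is just a couple of commands plus a rule in .gitattributes (the *.psd pattern and file name here are only examples):

    git lfs install        # hook the LFS filters into your git config (once per machine)
    git lfs track "*.psd"  # writes a filter=lfs rule into .gitattributes
    git add .gitattributes hero.psd
    git commit -m "Track Photoshop files via LFS"

After that, pushes upload the blobs to the LFS store and the repo itself only carries small pointer files.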
Man, had to really dig to find that. Ctrl-F "lock" didn't hit anything on the landing page or main docs page.
Even then, it looks like an implementation that misses the mark. Both Perforce and SVN mark "lockable" files as read-only when you sync your local copy. It looks like all this does is prevent a push of the file, which leaves you in the same spot as if someone had merged ahead of you.
Without that read-only guard in place it's really easy to make a mistake and end up with changes to a binary file that will collide. Having a checkout step as part of the locking workflow is pretty much mandatory.
[edit]
Never mind, I finally found this[1], which explains that Git LFS does it correctly, kudos! It could do with more detailed documentation, though; judging from the man pages it looks only partially implemented.
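For anyone else who went digging: the piece I'd missed is the lockable flag in .gitattributes. If I'm reading the man pages right, files matching a lockable pattern are checked out read-only until you take the lock (the *.psd pattern and path are just examples):

    # .gitattributes
    *.psd filter=lfs diff=lfs merge=lfs -text lockable

    git lfs lock images/hero.psd    # take the server-side lock; the file becomes writable
    git lfs locks                   # list who holds which locks
    git lfs unlock images/hero.psd  # release the lock once your changes are pushed

That's the same checkout-before-edit guard Perforce and SVN give you.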
I dabbled a bit with LFS, but it seems specifically geared towards a few very large resources whose syncing you have to take care of manually.
What we need to move off large repos in TFVC or SVN is something where the inconvenience is small enough that it doesn't outweigh the benefits of git. LFS was not that; this could be.
I made a stupid decision to use git-lfs to track a pre-initialized Docker database image (using LFS to store initial_data.sql.bz2). Now I have to live with a few dozen gigabytes of past versions I don't really need (it's a small database).
I know I can rewrite the history and then find something to garbage-collect the unreferenced blobs (I haven't investigated it yet). The repo allows doing so, as it only holds those dumps, the Dockerfile, and a pair of small shell scripts. And, most importantly, it has only 2 users: me and my CI. If it were large assets and lots of code, checked out and worked on by a large team, I guess rewriting the history wouldn't be an option.
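For what it's worth, a rough sketch of that cleanup using git-filter-repo, assuming the dump file name from above (whether the server actually frees the LFS storage afterwards depends on the host):

    # Run in a fresh clone; this rewrites every commit to drop the dump from history
    git filter-repo --invert-paths --path initial_data.sql.bz2

    # Re-add only the current version of the dump as a single new commit
    git add initial_data.sql.bz2    # assumes you copied the current dump back in
    git commit -m "Re-add current database dump"

    # Drop local LFS objects that no longer back any retained commit
    git lfs prune

    # Force-push and have the CI re-clone
    # (filter-repo removes the origin remote as a safety measure; re-add it first)
    git push --force --all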
[1] https://git-lfs.github.com/