I'm not sure, I haven't been actively involved in development for a couple years.
There are a couple of distinct activities. One is rolling utxo hashes, which have no major engineering hurdles and can enable a compromised-security "bootstrap from a utxo set".
The other is schemes that allow nodes to validate without keeping the utxo set at all-- these have historically had unfavorable IO costs, and the bandwidth/storage tradeoff hasn't seemed that appealing. E.g. would you find reducing storage from 10 GB to 1 MB appealing if it came at the cost of a 10x increase in bandwidth? In some applications it would be, in others not.
I believe work related to both has been ongoing, however.
> compromised security "bootstrap from a utxo set"
could you elaborate on what's compromised about including a sha256 of the utxo set in every block and allowing users to choose how far back they want to bootstrap from?
isn't it strictly better than the current situation with assumevalid?
Depending on a utxo state committed in blocks is effectively the SPV security model-- it's utter blind trust in miners to set the value honestly, something which is only theoretically sound on the assumption that someone else is checking.
If you're happy with the SPV security model-- perhaps you should be using SPV? :) That's a little trite, I know, and it's not quite identical because of the history: but the vast majority of the sync time is in the last two years in any case, and practical considerations mean you wouldn't be able to just arbitrarily choose how far back to sync from (you need to be able to get the utxo set as of that height).
In the ethereum world effectively almost all synchronization is done using 'fast sync', which is essentially this committed-utxo, blindly-trust-miners model. Performance and storage considerations mean you can't go back more than a tiny amount of time (I believe it's normally 4 hours). Many commercial entities operate multiple nodes, and if they detect they've fallen behind they just auto-restart and fast sync to catch back up. Effectively this means that if miners commit invalid state, those nodes will just blindly accept it after a couple hours' outage.
All assumevalid is doing is asserting that the ancestors of a specific block hash, two weeks back and further, all have valid signatures. When you get a value set there as part of the software you're running, you're already assuming that the software isn't backdoored (e.g. because of a public review process, or your own review). Assumevalid is strictly easier to review than pretty much any other aspect of the software's integrity: there are 100 places where a one-character change would silently bypass validation completely, while reviewing AV simply requires checking that the value set in it is an accepted block in some existing running node. AV as implemented also requires the blockchain to agree and to have two weeks of work on top of it, so it's in every way harder to undermine validation by messing with AV than by changing the code some other way.
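For concreteness, the gist of that check can be sketched in a toy model (a linear chain with one unit of work per block; the function name and the 2016-block buffer are illustrative stand-ins, not Bitcoin Core's actual code):

```python
# Toy sketch of assumevalid-style logic (illustrative names, not Core's code).
# Model: a linear header chain as a list of block hashes, oldest first,
# with each block counting as one unit of work.

TWO_WEEKS_WORK = 2016  # stand-in for ~two weeks of chain work

def needs_script_checks(chain, block_index, assumevalid_hash):
    """Return True if the block's signatures must actually be checked."""
    try:
        av_index = chain.index(assumevalid_hash)
    except ValueError:
        return True  # AV hash isn't in our best chain: validate everything
    if block_index > av_index:
        return True  # block isn't an ancestor of the AV block
    buried = (len(chain) - 1) - av_index
    # Skip checks only when the AV block has ~two weeks of work on top of it.
    return buried < TWO_WEEKS_WORK
```

I.e. the setting can only ever *skip* checks for blocks already buried under the AV point; anything newer, or any chain that doesn't contain the AV hash, is fully validated.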
On a technically pedantic point: it takes a minute or so to sha256 the utxo set, so doing literally what you suggest would utterly obliterate validation performance. (Fortunately, rolling hashes accomplish what you mean without the huge performance hit.)
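To illustrate why a rolling hash avoids that cost: each utxo is mapped to a group element and the set hash is the product of those elements, so adding or removing one output is O(1) instead of re-hashing the whole set. A toy Python sketch (the modulus matches Bitcoin Core's MuHash3072, but the element derivation here is a simplified assumption-- the real code expands the hash with ChaCha20):

```python
import hashlib

# Prime modulus used by Bitcoin Core's MuHash3072.
MODULUS = 2**3072 - 1103717

def element(utxo: bytes) -> int:
    # Map a serialized utxo to a 3072-bit group element.
    # Simplified: chained sha256 instead of the real ChaCha20 expansion.
    h = hashlib.sha256(utxo).digest()
    out = b""
    while len(out) < 384:  # 384 bytes = 3072 bits
        h = hashlib.sha256(h).digest()
        out += h
    return int.from_bytes(out, "little") % MODULUS

class RollingUtxoHash:
    def __init__(self):
        self.acc = 1  # multiplicative identity = empty set

    def add(self, utxo: bytes):
        self.acc = (self.acc * element(utxo)) % MODULUS

    def remove(self, utxo: bytes):
        # Removing = multiplying by the element's modular inverse.
        self.acc = (self.acc * pow(element(utxo), -1, MODULUS)) % MODULUS

    def digest(self) -> bytes:
        return hashlib.sha256(self.acc.to_bytes(384, "little")).digest()
```

Updating the commitment per block touches only that block's created and spent outputs, and the result is independent of insertion order, so the same set always hashes to the same value.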
> Depending on a utxo state in blocks is effectively the SPV security model, -- it's an utter blind trust in miners to set the value honestly
if it was hardforked in as part of the consensus protocol, miners wouldn't be able to set an invalid utxo set hash any more than they are able to "produce" blocks with invalid signatures-- or am i missing something?
as for storage and performance, maybe it would make sense to take the performance hit of maintaining a persistent immutable set such that you would be able to travel back as far as you like with minimal overhead.
do you know of any active PRs/branches where utxo commitment work is/has been happening?
They can produce blocks with invalid signatures, but they're stopped by nodes validating. If instead of validating, nodes skip blocks and use a commitment to the state, then they're not validating anymore. How that fails is why I gave the ethereum example: I think the security there has practically failed already-- it just hasn't been exploited yet.
> as for storage and performance, maybe it would make sense to take the performance hit of maintaining a persistent immutable set such that you would be able to travel back as far as you like with minimal overhead.
The cost of supporting that arbitrarily would be extremely high, over and above the cost of keeping the complete blockchain. I don't see why anyone would choose to run a node to serve that-- I certainly wouldn't; it's obnoxious enough just to run an archive node. Having some periodic snapshots would probably be fine... but not that many, since each would be on the order of 7 GB of additional storage.
No, but there is ongoing work that I haven't been following closely. It sounds like you're more interested in the assumeutxo style of usage, so search for that and for muhash.