I see them crop up everywhere. IMO, they are decidedly human-unfriendly - particularly to programmers and database admins trying to debug issues. Too many digits to deal with, and they suck up too much column width in query results, spreadsheets, reports, etc.
I'm not saying they don't have a place (e.g. when you have a genuine need to generate unique identifiers across completely disconnected locations, and the IDs will generally never need to be dealt with by a human). But in practice they've been abused to do everything under the sun (filenames, URL links, user IDs, transaction numbers, database primary keys, etc.). I almost want to start a website with a gallery of all the examples where they've been unsuitably shoehorned in when just a little more consideration would have produced something more humane.
For most common purposes, a conventional, centralized dispenser is better. Akin to the Take-A-Number reels you see at the deli. Deterministic randomization is a thing if you don't want the numbers to count sequentially. Prefixes, or sharding the ID space, is also a thing, if you need uniqueness across different latency boundaries (like disparate datacenters or siloed servers).
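The "deterministic randomization" idea can be sketched as a small Feistel-style permutation over sequential counters. The round function and keys below are arbitrary illustrations (not a vetted cipher), but any Feistel construction gives you a collision-free, invertible mapping, so IDs stop looking sequential without risking duplicates:

```python
# Toy Feistel network: bijectively scrambles a 32-bit sequential ID.
# Round function and keys are illustrative, not cryptographic.
KEYS = (0xA5A5, 0x3C3C, 0x0F0F, 0x9999)

def scramble(n: int) -> int:
    left, right = (n >> 16) & 0xFFFF, n & 0xFFFF
    for k in KEYS:
        # Standard Feistel step: swap halves, mix one half with the other.
        left, right = right, left ^ (((right * 0x9E37 + k) >> 3) & 0xFFFF)
    return (left << 16) | right

def unscramble(n: int) -> int:
    left, right = (n >> 16) & 0xFFFF, n & 0xFFFF
    for k in reversed(KEYS):
        # Inverse step: undo the mix, swap back.
        left, right = right ^ (((left * 0x9E37 + k) >> 3) & 0xFFFF), left
    return (left << 16) | right
```

Because the mapping is a permutation of the 32-bit space, every counter value gets a distinct scrambled ID, and you can always recover the original counter if you need it.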
I've lost count of how many times I've seen a UUID generated when what the designer really should have done is just grab the primary key (or when that's awkward, the result of a GetNextId stored procedure) from their database.
At a prior job, there was an internal project-code system for tracking billable hours and people assignments. Everyone knew the codes of their projects. It was a six-character code, two letters followed by four digits, giving a roughly 7-million-point space. The company was ~100 years old and only had some 15k codes recorded in all its history. The list of codes was manually updated once a quarter by an admin who might add another ten at a time.
Some chucklehead decided to replace the system with UUIDs. Now they are no longer human-memorable, readable, writable on paper, or useful in any other way. Even better, they must have used some home-grown implementation, because the codes were weirdly sequential. If you ever had to look at a dump of codes, the ids were almost identical except for a digit somewhere in the middle.
Destructive change that ruined a perfectly functional system.
It's funny how fast it is to just implement a counter, and how much people rely on UUIDs to avoid it. If you already use Postgres somewhere, just create a "counter" table for your namespace. You can easily issue 10k-100k values per second or faster, with room to grow if you outscale that.
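A minimal sketch of the counter-table idea, using Python's stdlib SQLite bindings as a stand-in for Postgres (table and namespace names here are made up; in Postgres you'd more likely reach for a native SEQUENCE or `UPDATE ... RETURNING`):

```python
import sqlite3

# One row per namespace; each namespace hands out its own dense integer ids.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, value INTEGER NOT NULL)")
conn.execute("INSERT INTO counters VALUES ('orders', 0)")

def next_id(namespace: str) -> int:
    # The "with conn" block wraps the increment + read in one transaction.
    with conn:
        conn.execute("UPDATE counters SET value = value + 1 WHERE name = ?", (namespace,))
        row = conn.execute("SELECT value FROM counters WHERE name = ?", (namespace,)).fetchone()
        return row[0]

print(next_id("orders"), next_id("orders"))  # prints "1 2": small, dense, sortable ids
```

The ids come out small and contiguous, which is exactly what makes the downstream compression and bitmap tricks work.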
What do you get? The most efficient, compressible little integers you could ever want. You unlock data structures like roaring bitmaps/treemaps. You can cut memory to 25%, depending on your cardinality (i.e. you can sometimes use a u16 or u32 in memory). You get insane compression benefits, where rows of these integers can take a few bits each after compression. You get faster hashmap lookups. It's just insane how this compounds into crazy downstream wins.
It is absolutely insane how little this costs and how many optimizations you unlock. But people somehow think id generation will be their bottleneck, or maybe it's just easier to avoid a DB sometimes, or whatever, and so we see UUIDs everywhere. Although, agreed that most of the time you can just generate the unique id for the data yourself.
In fairness, UUID is easier, but damn it wrecks performance.
“Which row was it, ‘basketball fish’ or ‘cake potato’?”
Of course, the words would need to be a checksum. As soon as you introduce them, nobody is looking at the hex again. Which is an improvement, since nobody is really looking at the hex now anyway - it’s just “the one ending in ‘4ab’”.
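The checksum-word idea can be sketched by hashing the ID and indexing into a word list. The word list below is a tiny illustrative stand-in; a real deployment would use a larger curated list of short, distinct words:

```python
import hashlib

# Illustrative word list; a real one would be a few hundred curated words.
WORDS = ["basketball", "fish", "cake", "potato", "river", "lemon", "otter", "maple"]

def checksum_words(record_id: str, n: int = 2) -> str:
    # Hash the full ID so the words act as a checksum of the whole thing,
    # then map successive digest bytes to words.
    digest = hashlib.sha256(record_id.encode()).digest()
    return " ".join(WORDS[digest[i] % len(WORDS)] for i in range(n))
```

Any change to the underlying ID changes the words with high probability, so "basketball fish" vs. "cake potato" is enough to tell two rows apart in conversation.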
There's been a lot of historical work on this; I used NIST FIPS 181 to implement it.
Note: FIPS 181 was intended for passwords, and I was using it for handy short human-readable record IDs as per your post. You probably shouldn't use FIPS 181 for passwords in 2026 LOL.
Describing FIPS 181 output as pronounceable is optimistic. However, it's better than random text as far as human conversations go. The strings start looking like mysterious assembly-language mnemonics after a while.
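For flavor, here is a sketch of the general consonant/vowel-alternation trick behind pronounceable-string generators. This is not FIPS 181 itself (which uses digraph/trigraph frequency tables), just an illustration of why the output reads like assembly mnemonics:

```python
import random

# Not FIPS 181 - just the underlying idea: alternating consonants and
# vowels tends to produce pronounceable-ish syllables.
CONSONANTS = "bdfgklmnprstvz"
VOWELS = "aeiou"

def pronounceable(length=8, seed=None):
    rng = random.Random(seed)  # seedable for reproducible examples
    chars = []
    for i in range(length):
        pool = CONSONANTS if i % 2 == 0 else VOWELS
        chars.append(rng.choice(pool))
    return "".join(chars)
```

Output like "datomupi" or "kilovaza" is sayable over the phone, but after a few dozen of them they all blur together, which matches the mnemonic complaint above.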
C# is supported! It goes through sem-core's (the underlying library we use for parsing in Weave) tree-sitter-c-sharp plugin. Classes, methods, interfaces, enums, and structs are all extracted with it. Let me know if you hit anything.
Cool! I didn't see it listed on the main page so that's why I asked. Are there a lot of languages similarly supported via plugins? Are they all listed somewhere?
Edit: Also, how are comments treated in general (especially if they exist outside the structures you mentioned)? E.g. does it somehow surface "contradictory"/conflicting edits made within comments? Or are they totally ignored?
For comments: weave bundles doc comments (JSDoc, ///, /* */, etc.) with the entity they belong to. So if one branch edits a doc comment and another edits the function body, they merge as part of the same entity. Standalone comments between functions are treated as interstitial text and merge normally.
Thanks for the heads-up, I will update it. We use a language-specific parser called sem here: https://ataraxy-labs.github.io/sem/ - you can use all the languages listed there in Weave; it's a separate library. So if you want to add language support, you can open a PR there.
I probably would have settled for a button, but the gesture is so nice. (There's also a "twist" gesture to activate the camera. It's also nice. I haven't ever gotten false positives with the gestures and they have become second nature. I never used any gestures before having a Moto phone and always thought accelerometer-based input was gimmicky and unreliable based on my experience with prior phones.)
Without criticizing or implying any conspiracy theories, I did find it odd that the news release quoted "a spokesperson at GrapheneOS" without attributing it to a named person.
We badly need alternative(s) like GrapheneOS, and I want to see it succeed. I hope as the project matures, the sense of professionalism and stability it projects will strengthen. For what it's worth, I personally feel the business partnership is a step toward that end, and am really happy to see some manufacturer diversity.
The statement was put together collaboratively by our COO, community manager and moderation team which is the case with a lot of our written responses to journalists and others. People with their real name tied to GrapheneOS are targeted with conspiracy theorizing and harassment. You can see it throughout this Hacker News thread. People personally target myself and other members of the team in very vile ways.
Is it possible to disable passkey support in Chromium and have it tell websites that feature is unsupported? So you no longer get prompted to create them? (On a global or per-site basis)
I don't want any cloud service to be my passkey provider. I'm not comfortable establishing that kind of dependence on a company I don't trust.
I'd be content to keep passkeys in a datastore that I control, and which I can inspect and manage on my own (including backup, restore, and ideally even daily migration to a "hotspare" device).
As more websites adopt passkeys, I find myself continually being nagged to adopt them. Although their intent is benign, the companies use dark UI patterns like not respecting my choice after I've said NO (there isn't a "Never" option, just a "Skip for now"), and they continually shove the unwanted reminders in my face every time I login.
My concern is that eventually I'll accidentally hit the wrong button and create an unwanted passkey that's tied to my machine/cloud account/vendor service in a way I don't control. In fact, I'd bet product managers are counting on that manipulation (nag the user until they concede) to brag about adoption rates and get raises.
My browser is supposed to be my agent, so I think it's a valid question - and related to this topic - to ask whether there's a way to turn off the unwanted functionality.
You can use any credential manager you choose. It is an open ecosystem. If you don't want to use a cloud service, don't. You can self-host many credential managers. There are also many solutions that just use a local database.
"We demonstrated simultaneous reception of neighboring channels with strong isolation between them." This enabled the researchers to monitor numerous radio channels at once, instead of tuning into them individually.
Can anyone elaborate on this? How does a single receiver produce multiple concurrent outputs, and how are they isolated in this context?
Because all of the signals are superimposed. So if your receiver isn't selective, it will show all of them at once, and if you then demodulate selective parts of the spectrum by filtering, you can isolate the signals individually.
Think of any antenna: it is just a rod or a coil, it may have a specific frequency that it particularly likes because that is a nice fraction of its wavelength or close to its own resonance frequency, but that doesn't mean it isn't going to receive all the other signals to greater or lesser extent as well. The ratio between that one that it likes and the rest is called selectivity. The lower the selectivity the more evenly you will receive all signals at the same time.
Usually receivers have a tuned front-end to get as much of the signal you want and to repress the rest as much as possible but that is optional, you can have a wideband front end just the same.
If you're just looking for the modern explanation of how an SDR works: an SDR just measures electrical current flow over time. If you graph current over time as detected, it usually just looks like a bunch of signals all superimposed upon each other - a signal like f = a*sin(x) + b*sin(y) + c*sin(z) if all the transmitters were just transmitting pure carriers with no modulation.
Somewhere in the SDR, electrical current flow becomes a number via analog-to-digital conversion, on the order of 7-12 bits in most SDRs. At that point you apply something like an FFT, and the time-domain signal becomes a frequency-domain signal, which can separate out the different carriers.
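That pipeline - superimposed carriers in the time domain, separated by a transform into distinct frequency bins - can be sketched with a naive DFT. The two carrier frequencies and amplitudes below are arbitrary illustration values:

```python
import cmath
import math

# Two unmodulated "carriers" (bins 5 and 12) superimposed in the time domain.
N = 64
samples = [math.sin(2 * math.pi * 5 * t / N) + 0.5 * math.sin(2 * math.pi * 12 * t / N)
           for t in range(N)]

def dft(x):
    # Naive O(n^2) discrete Fourier transform; an FFT computes the same thing faster.
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

spectrum = [abs(v) for v in dft(samples)]

# The two strongest positive-frequency bins recover the two carriers.
peaks = sorted(sorted(range(N // 2), key=lambda k: spectrum[k], reverse=True)[:2])
print(peaks)  # prints [5, 12]
```

In the time domain the two sines are inseparably summed; after the transform each carrier sits in its own bin, which is the "simultaneous reception with isolation" being described.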
Unlike conventional cars that require expensive safety systems such as air bags and seat belts, the mover3000's top speed of one mile per hour makes it intrinsically safe.
The Ribbon is a disaster. Compared to conventional toolbars, it fails across several metrics.
When it first came out, I did studies of myself using it vs. the older toolbared versions of Word and Excel, and found I was quantifiably slower. This was after spending enough time to familiarize myself with it and get over any learning curve.
EFFICIENCY
The biggest problem is it introduced more clicks to get things done - in some cases twice as many or more. Having to "tab" to the correct ribbon pane introduces an extra click for every task that used to be one click away, unless the button happens to be on the same tab. Unfortunately the grouping wasn't as well thought out as it could have been. It was designed with a strong bias for "discoverability" over efficiency, and I found with many repetitive tasks that I commonly carried out, I was constantly having to switch back and forth between tabs. That doesn't even get into the extra clicks required for fancier elements like dropdowns, etc. And certain panes they couldn't figure out where to put are clearly "bolted" on.
KEYBOARD SHORTCUTS
At the same time, Microsoft de-emphasized keyboard accelerators. So where the old toolbar used to hint you the keyboard shortcut in a tooltip every time you rested your mouse over a button, the new one doesn't - making it unlikely users will ever learn the powerful key combos that enable more rapid interaction and reduce RSI (repetitive strain injury) caused by mousing. In my case this manifests as physical pain, so I'm very aware of wasteful gestures.
SCREEN REAL ESTATE
The amount of text in the button captions on the ribbon is also excessive. It really isn't a toolbar at all - more of a fancy dropdown menu that's been pivoted horizontally instead of vertically. It turned the menu bar, which used to be a nice, compact, single line, into something that takes up ~4x as much vertical screen real estate. As most users' monitors are in landscape orientation, vertical space is scarce to start with; congratulations, you just wasted more of those precious pixels, robbing me of space to look at what I really care about: the document or whatever I'm actually working on.
DISCOVERABILITY
You used to be able to get a good sense of most software's major functionality by strolling through all the menu options. Mastery (or at least proficiency) was straightforward. With the more dynamic paradigm Microsoft adopted along with the Ribbon, there's lots of functionality you don't even see until you're in a new situation (or that's hidden by the responsive window layout, which is ironic - instead of making the thing more compact, they made portions of it disappear if your window is too small). I grant some may argue this has benefits for not appearing as overwhelming to new users (although personally I've always found clean, uniform, well-thought-out menus less jarring than the scattered and more artistically inclined ribbon). But easing the learning curve had the trade-off of leaving those users perpetually stuck in "beginner" mode. They also can't customize the ribbon as meaningfully (I used to always tailor the toolbar by removing all the icons I already knew the keyboard shortcuts for, adding some buttons that were missing like Strikethrough, and moving it to the same row as the menu bar to maximize client-area space).
In my case, after trying out the new versions for a year, I made an intentional decision to go back to the 2003 versions of Word and Excel, and never look back (forward?). They are my daily drivers. These days, I barely touch modern versions of Word and Excel, except for the very rare instance I actually need a specific new feature (e.g. a spreadsheet with more than 65k rows). If someone asks me to use the new version, I simply refuse (which has never been a showstopper - my work quality is preeminent, and once you get past policy bureaucracy it turns out clients/employers don't care what tool I use to get it done).
The whole point of a toolbar was always to be a place you could pin commands you want instant access to, just a click away. The ribbon shredded that paradigm, and in my opinion took us a marked step backward in computing. It fails across several metrics, compared to regular toolbars. I wanted to blog about it at the time in hopes of convincing the world it was a mistake, but didn't have the free time. 20 years later, I'm curious if more people share these sentiments and acknowledge its shortcomings.
> So where the old toolbar used to hint you the keyboard shortcut in a tooltip every time you rested your mouse over a button, the new one doesn't
Although it is bad that it does not display the keyboard shortcuts, you can press ALT and then it will tell you which letter to press next. (I just guessed that pressing ALT might do something, possibly display a menu; it did not display another menu, but it did help.) This is not quite as good as using the other keys such as CTRL or the numbered function keys, but it is possible.
(I do not use those programs on my own computer, but on some other computers I sometimes have to, and this helps, although not as well as it would to use menus and other stuff instead. However, in some cases I was able to use it because of knowledge of older versions of Microsoft Office; many of the keyboard commands are the same.)
I think the menu bar is much better, and toolbars should not be needed for most things. With the menu bar it will underline the letters to push with ALT and also will tell you what other keys to use (if any) for that command. (One thing that a toolbar is helpful for is to display status of various functions that can change, such as the current font. Due to that, you might still have a toolbar, but you do not need to put everything in the toolbar. Perhaps combine the toolbar with the status bar to make it compact.)
(Something else that would improve these word processing software would be the "reveal codes" like Word Perfect. A good implementation of reveal codes would avoid some of the problems of WYSIWYG. For spreadsheet software, arranging the grid into zones, and assigning properties (including formatting and formulas) to zones, and making references work with zones, etc, would be helpful, but I don't know that any existing software does that.)
In my own software I do try to make the display compact so that there is more room for other stuff, instead of putting all of the commands and other stuff on screen where they take up too much space. Good documentation is helpful to make it understandable; this works much better than trying to design the software to not need documentation, since then the lack of documentation makes it difficult to understand.