Based on the origins of Rust as a tool for writing the really thorny, defensive parsers of potentially actively hostile code for Firefox, I have to imagine that another web browser is the most at-home place the language could ever be.
I feel like the lag-time of communication was an important component of older forms of communication that has been lost. That's not to say that fast communication isn't a boon to society, of course. Only that slower communication gives you more flexibility in how you respond, and more time to think about what your response should be.
When the main form of long distance communication was the postal system, and letters took days to travel from sender to receiver, you could easily wait days, if not weeks, to draft up your reply and mail it out. The recipient on the other end wouldn't even be able to discern the difference between your delay and the delay from the postal network itself. It had some in-built slack.
When the only phones were landlines, if someone called you and you knew you were in a bad mood, the kind of bad mood that would invariably make you say something stupid, you could just not pick up! There were plenty of common, understandable reasons someone wouldn't be available to answer their landline. Then they could leave you a message, and you could call back when your mood improved again. Again, there was slack built into the system.
Now there's this cultural expectation that puts far more attention on your reaction speed. A text message with no immediate response could just be them not seeing it immediately... But actually no! Now we have read receipts too! You can't even pretend to have not seen it yet while you think of your reply. Some platforms even have the little "currently typing" indicator tell them how long you've spent drafting and re-drafting whatever message you ended up sending. A panopticon of communication. Now there's no slack. Any person anywhere in the world could try and get a hold of you with the same expectation of immediacy that a face-to-face conversation would supply.
Now of course, not every single person I might text, call, or send an email to will have the same expectations for what is an appropriate degree of responsiveness. But (speaking from my personal experience) I am absolutely miserable at reading that from social cues. I am left having to assume that, in the absence of some clear indicator to the contrary, whoever I am writing to will actually have rather strict expectations, and that allowing myself to be lax may very well give them a terrible opinion of me. (Though, the degree to which their opinion of me actually matters is a different question entirely!)
> I am left having to assume that, in the absence of some clear indicator to the contrary, whoever I am writing to will actually have rather strict expectations
This is self-defeating. You have the option (and I recommend it) to intentionally adopt the opposite assumption:
No communication is urgent unless it's explicitly flagged as urgent.
It might be appropriate to make exceptions for certain people. Parents, partners, children. Maybe some work people during a crunch. Maybe some friends going through difficult times.
And still, we apologised ('I hope this finds you well' and so on). It's cruft, it's slack, and it's social. We need some anchors to hang our message on. We know when it's necessary and when it isn't, and by breaking conventions we relay intent ('sorry not sorry').
If imposition were something for this site to add, I'd recommend doing it through LaTeX with the pdfpages package[1]. You generate the PDF normally, then re-lay it out using a second LaTeX file dedicated to just doing the imposition. It's how I've done all of my imposition so far, and it's more than powerful enough to do the kind of simple page layout that you would want to do with a home printer.
More complex layout might be needed if you happened to have a printer that could handle, like, A0-size paper, or continuous rolls, which would give more flexibility in the number of ways you could fit your pages onto the stock material. For the hobbyist, though? More than good enough.
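For the curious, the second file can be tiny. A minimal sketch (the filename is made up, and the options will vary with your printer and binding; the pdfpages manual covers the variations):

```latex
% second .tex file that only does the imposition of an existing
% my-book.pdf (hypothetical filename)
\documentclass{article}
\usepackage{pdfpages}
\begin{document}
% 2-up booklet: pages reordered and paired for saddle-stitch folding
\includepdf[pages=-, booklet=true, landscape]{my-book.pdf}
\end{document}
\end{verbatim}
```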
yeah, same here, I was like "wow what an interesting side to their business, a whole operating system intended to serve public and academic libraries!"
I think parent poster was referring to an actual library, i.e. where you would borrow books.
That's also what I thought this was, and came to the comments expecting to see something neat about why libraries might need bespoke operating systems.
Ah right! Yeah, I did think that too... like locked down so random patrons couldn't do this or that. I was thinking that was quite a pivot for MS, though, too...
I was in the market for a vinyl cutter/knife plotter a while back, and the fact that I use Linux on everything was my main reason for avoiding Cricut. Ended up finding out there's an open source Inkscape plugin that interfaces with the Silhouette brand of knife plotters.
Not having to use the proprietary jank software is so nice, it's a value-add over the Cricut just to not have to use their software.
So it isn't direct? That's what you're saying. You're saying that there are two options for how to map any property of structured data. That's bad, you know that right? There's no reason to have two completely separate, incompatible ways of encoding your data. That's a good way to get parsing bugs. That's just a way to give a huge attack surface for adversarially generated serialized documents.
Also, self-documentation is useless. A piece of data only makes sense within the context of the system it originates from. To understand that system, I need the documentation for the system as a whole anyway. If you can give me any real-life situation where I might be handed a json/xml/csv/etc file without also being told what GENERATED that file, I might be willing to concede the point. But I sure can't think of any. If I'm writing code that deserializes some data, it's because I already know the format or protocol I'm interested in deserializing. You can't write code that just ~magically knows~ how its internal representation of data maps to some other arbitrary format, just because both have a concept of a "person" and a concept of a "name" for that person.
The problem with tags in XML isn't that they are verbose, it's that putting the tag name in the closing tag makes XML a context-sensitive grammar, which is a NIGHTMARE to parse in comparison to a context-free grammar.
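A toy matcher makes the point concrete: the only way to check closing tags is to carry a stack of every open tag name, which is exactly the extra context that anonymous brackets don't need. (Python sketch; it deliberately ignores comments, CDATA, and other real-XML details.)

```python
import re

def tags_balanced(xml: str) -> bool:
    """Sketch: does every </name> match its <name>?

    The stack of open tag names is the 'context' the parser must
    carry -- matching named tags is the classic example of a feature
    that pushes a grammar beyond context-free.
    """
    stack = []
    for m in re.finditer(r"<(/?)([A-Za-z][\w.-]*)[^<>]*?(/?)>", xml):
        closing, name, self_closing = m.groups()
        if self_closing:          # <br/> opens and closes itself
            continue
        if closing:               # </name> must match the last open tag
            if not stack or stack.pop() != name:
                return False
        else:                     # <name ...> pushes onto the stack
            stack.append(name)
    return not stack

print(tags_balanced("<a><b>hi</b></a>"))   # True
print(tags_balanced("<a><b>hi</a></b>"))   # False
```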
Comments are only helpful when I'm directly looking at the serialized document. And again, that's only gonna happen when I'm writing the code to parse it, which will only happen when I also have access to the documentation for the thing that generated it.
"tooling that can verify correctness before runtime": what do you even mean? Are you talking, like, compile-time deserialization? What serialized data needs to be verified before runtime? Parsing Is Validation, we know this, we have known this for YEARS. Having separate parsing and validation steps is how you get parser differential bugs within your deserialization pipeline.
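For the record, the "parse, don't validate" shape being invoked here looks like this (a minimal Python sketch; `Port` is a made-up example type):

```python
from dataclasses import dataclass

# The parser either returns a fully-formed, type-safe value or raises.
# There is no separate validation pass that a second code path could
# skip or disagree with -- which is where differentials come from.
@dataclass(frozen=True)
class Port:
    value: int

def parse_port(raw: str) -> Port:
    n = int(raw)                 # raises ValueError on non-numeric input
    if not 0 < n < 65536:
        raise ValueError(f"port out of range: {n}")
    return Port(n)

print(parse_port("8080"))        # Port(value=8080)
```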
Admittedly, I try and stay away from database design whenever possible at work (everything database-related is legacy for us). But the way the term is being used here kinda makes me wonder: do modern SQL databases have enough security features and permissions management in place that you could just directly expose your database to the world with a "guest" user that can only make incredibly specific queries?
Cut out the middle man, directly serve the query response to the package manager client.
(I do immediately see issues stemming from the fact that you can't leverage features like edge caching this way, but I'm not really asking if it's a good solution, I'm more asking if it's possible at all)
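As a toy illustration of the "guest who can only read specific things" idea: SQLite's authorizer hook can express it in-process (hosted servers like Postgres would use GRANT/REVOKE on roles instead; the table names here are made up):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE packages (name TEXT, version TEXT);
    CREATE TABLE secrets (token TEXT);
    INSERT INTO packages VALUES ('foo', '1.0');
    INSERT INTO secrets VALUES ('hunter2');
""")

def guest_policy(action, arg1, arg2, db_name, trigger):
    # allow SELECT statements and reads of `packages` columns only;
    # every other action (writes, other tables) is denied
    if action == sqlite3.SQLITE_SELECT:
        return sqlite3.SQLITE_OK
    if action == sqlite3.SQLITE_READ and arg1 == "packages":
        return sqlite3.SQLITE_OK
    return sqlite3.SQLITE_DENY

db.set_authorizer(guest_policy)

print(db.execute("SELECT name FROM packages").fetchall())  # allowed
try:
    db.execute("SELECT token FROM secrets")
except sqlite3.DatabaseError as e:
    print("denied:", e)
```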
There are still no realistic ways to expose a hosted SQL solution to the public without really unhappy things occurring. It doesn't matter which vendor you pick.
Anything where you are opening a TCP connection to a hosted SQL server is a non-starter. You could hypothetically have so many read replicas that no one could blow anyone else up, but this would get to be very expensive at scale.
Something involving SQLite is probably the most viable option.
I personally think that this is the future, especially since such an architecture allows for E2E encryption of the entire database.
The protocol should just be a transaction layer for coordinating changes of opaque blobs.
All of the complexity lives on the client.
That makes a lot of sense for a package manager because it's something lots of people want to run, but no one really wants to host.
There's no need to have a publicly accessible database server, just put all the data in a single SQLite database and distribute that to clients. It's possible to do streaming updates by just zipping up a text file containing all the SQL commands and letting clients download that. Or even a more sophisticated option is eg Litestream.
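A minimal sketch of that snapshot-plus-SQL-log flow (the schema is invented for illustration, and an in-memory database stands in for the distributed .db file):

```python
import sqlite3

# The distributed snapshot: just a SQLite database file.
snapshot_sql = """
    CREATE TABLE packages (name TEXT PRIMARY KEY, version TEXT);
    INSERT INTO packages VALUES ('foo', '1.0');
"""

# The update feed is just more SQL text -- in practice zipped and
# served as a static file, so plain HTTP caching and CDNs still work.
update_log = "INSERT OR REPLACE INTO packages VALUES ('bar', '2.1');"

client = sqlite3.connect(":memory:")   # stands in for the local .db
client.executescript(snapshot_sql)     # initial download
client.executescript(update_log)       # replay the streamed log tail

print(client.execute(
    "SELECT name, version FROM packages ORDER BY name").fetchall())
# [('bar', '2.1'), ('foo', '1.0')]
```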
I will stick with Firefox due to multi account containers. Chrome does not offer a comparable alternative, and this extension makes working with AWS significantly easier.
To me, Blink as a rendering engine is too closely coupled to Google. Even though Chromium is technically disconnected and open source, the amount of leverage Google has is too high.
I dread the possibility that Gecko and WebKit browsers truly die out, and the single biggest name in web advertising has unilateral sway over the direction of web standards.
A good example of this is that through the exclusive leverage of Google, all Blink-based browsers are phasing out support for Manifest V2. A widely unpopular, forced change. If I'm using a Blink-based browser, I become vulnerable to any other profit-motivated changes like that one.
Mozilla might be trying their hardest to do the same with this AI schlock, but if I have to choose between the trillion-dollar-market-cap dictator of the internet and the little kid playing pretend evil billionaire in their sandbox? Well, Mozilla is definitely the less threatening of the two in that regard.
But notably not a lack of competing browser engines, as the power behind these decisions comes from the product and not the open source libraries the product uses.
It's not based on cryptocurrency; there are just extra features that use it. Unstoppable Domains is an optional feature. You don't need to visit them, but it gives value to people by letting them actually own their domain instead of leasing it from ICANN. Viewing ads to earn BAT is an optional feature. As I mentioned, ad blocking is built in, so you can have it show no ads if you want.
Not OP, but personally I very much prefer Firefox font rendering on Windows. Text in Chromium-based browsers looks blurry to me, which causes eye strain. Firefox also has a much sharper and better-looking image down-scaling algorithm; images, again, look blurry in Chromium-based browsers.
Have you used Chrome? The depth of enshittification is staggering. Setting it up from scratch is like watching a Cory Doctorow documentary.
The only change that’d get me to willingly use the engine would be the DOJ mandating the return of manifest v2 support and then barring google from contributing to it for the next 40 years.
This feels like a very indirect way of saying "yes, the Fourier transform of a signal is a breakdown of its component frequencies, but depending on the kind of signal you are trying to characterize, it might not be what you actually need."
It's not that unintuitive to imagine that, if all of your signals are pulses, something like the wavelet transform might do a better job of giving you meaningful insight into a signal than the Fourier transform might.
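You can see the mismatch directly: a pure tone concentrates its energy in one FFT bin, while a narrow pulse smears across nearly the whole spectrum (quick numpy sketch; the signal parameters and threshold are arbitrary):

```python
import numpy as np

fs = 1000                              # sample rate, Hz
t = np.arange(0, 1, 1 / fs)            # one second of samples

tone = np.sin(2 * np.pi * 50 * t)      # one frequency, the whole time
pulse = np.zeros_like(t)
pulse[500:505] = 1.0                   # a 5 ms blip

def spectral_spread(x):
    """Fraction of FFT bins holding non-negligible energy."""
    mag = np.abs(np.fft.rfft(x))
    return np.mean(mag > 0.01 * mag.max())

print(f"{spectral_spread(tone):.3f}")   # tiny: energy in one bin
print(f"{spectral_spread(pulse):.3f}")  # near 1: energy everywhere
```

The Fourier view tells you the pulse "contains all frequencies," which is true but not what you usually want to know about a blip; a time-localized transform answers "when, and roughly at what scale" instead.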
The thinking that sines are the basic building blocks and "own" the frequencies is part of the problem. Fourier is a breakdown into frequencies of sine waves. Sines are fundamental in the physics of certain idealized conditions, but using sines is just a choice; mathematically, you could just as well use other bases. A triangle wave has mathematically the same right to "own" a frequency as a sine does.
Reality is often different from the ideal, and not that linear. So basic waveforms often aren't really sinusoidal. But people usually only know sines, so they use this hammer on every nail. Some people in electrical engineering maybe know about square waves, but there isn't, yet, enough deeper understanding out there for playing with the mathematical tools correctly.
Physics didn’t pick sinusoids because it “only knows about sines.” Physics picked them because the math forces them on you.
Actual engineers:
- use sinusoids because LTI systems respond to them with sinusoids of the same frequency (they're eigenfunctions)
- use square waves for digital logic
- use triangle waves for modulation
- use wavelets for compression or time/frequency localization
- use Hilbert transforms (and actually know what "orthonormal" means)
- use STFT, CWT, FFTs... and know exactly why Fourier works and when it breaks
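The first bullet is easy to demo: push a sine through any LTI system and the same frequency comes out, just scaled and phase-shifted (numpy sketch, with an arbitrary moving-average filter standing in for "any LTI system"):

```python
import numpy as np

fs = 1000                                  # sample rate, Hz
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)             # 50 Hz input sinusoid

h = np.ones(5) / 5                         # some LTI system (moving average)
y = np.convolve(x, h, mode="same")         # its response to x

# both spectra peak at the same bin: the system changed amplitude and
# phase, but not frequency -- sinusoids are eigenfunctions of LTI systems
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def peak_hz(s):
    return freqs[np.argmax(np.abs(np.fft.rfft(s)))]

print(peak_hz(x), peak_hz(y))              # 50.0 50.0
```

Feed the same filter a square wave and the output is visibly distorted (each harmonic gets a different gain), which is exactly why sinusoids, not squares or triangles, are the natural basis for LTI analysis.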