treyd's comments | Hacker News

> If by "end to end" you actually mean it's encrypted all the way to the server, that's just "encryption in transit".

This is what Zoom claimed was e2ee for a little while before getting in trouble for it.


This is also what Google claims is end-to-end encryption in their Gmail end-to-end thing. Many people, including me, pointed this out in the comments.

https://news.ycombinator.com/item?id=45458482

It's entirely their-end-to-their-end encrypted. You don't get any privacy.


Use our new open source (modification and redistribution not permitted) app to exchange end-to-end encrypted (from your client to our server) messages with your friends! Having all your data on our service protects your data sovereignty (we do not provide for export or interop) by guaranteeing that you always have access to your full history! Usage also protects your privacy (we analyze your data for marketing purposes) by preventing unscrupulous third parties from analyzing your data for marketing purposes.

If we had competent regulators, this sort of blatant willful negligence would constitute false advertising.


Is there a port of this to Emacs or integration with gptel?

Hi, not that I know of. Most of the code would not change, so it could easily be ported to other editors. The core is the Go server (`server/`).

It seems it would be possible to use this with minuet.el. I’m not familiar with it, though.

FreeBSD is not "compatible with Linux"; it provides a way to run Linux applications under a Linux-like syscall environment. What you're suggesting is as if you could load Linux kernel modules into the FreeBSD kernel.

The issue with NT is the driver ecosystem. You'd have to reimplement a lot of under-documented NT behavior for NT drivers to behave themselves, and making that work within the Linux kernel would require major architectural changes. The userland is also where a lot of magic happens for application compatibility.


> What you're suggesting is as if you could load Linux kernel modules into the FreeBSD kernel.

Afaik, you partially can.


Intellectual property, as it exists and is used today, is overwhelmingly used to stifle competition and entrench monopolies. It's used to project power internationally by deputizing foreign countries to protect American business interests. It's a far cry from how it's popularly presented, as a way for the "little guy" to protect their inventions.


I see you’ve never invented anything that you’ve risked having stolen.


It's not stolen; you don't lose it if someone copies it. It's infringed. And personally, as long as I am credited in the author file? I'm good.


If you can't use the correct terminology, then your entire statement is worthless.


GHA is based on Azure Pipelines. This is evident in how bad its security stance is, since Azure Pipelines was designed to be used in a more closed/controlled environment.


Code is typically run many more times than it's compiled, so this is a perfectly good tradeoff to make.


Absolutely, I was not trying to claim otherwise. But since we're engineers (at least I like to see myself as one), it's always worth keeping in mind that almost everything comes with tradeoffs, even traits :)

Someone down the line might be wondering why their Rust builds suddenly take 4x as long after merging something, and just maybe remembering this offhand comment will help them find the issue faster :)


For release builds, yes. For debug builds, slow compile times kill productivity.


If you are not willing to make this trade then how much of a priority was run-time performance, really?


It's never the case that only one thing is important.

In the extreme, you surely wouldn't accept a one-day or even one-week build time, for example? A one-week build isn't even hypothetical: a system could fuzz over candidate compilations, run load tests, do PGO, and deliver something better. But even if runtime performance were so important that you had such a system, you obviously still wouldn't accept developer cycles that take a week to compile.

Build time even matters for release: if you have a critical bug in production and need to ship the fix, a one-hour build can still lose you a lot. Release build time doesn't matter until it does.


A lot of C++ devs advocate for simple replacements for the STL that don't rely too heavily on zero-cost abstractions. That way you can have small binaries, fast compiles, and a fast-debug kind of build where you only turn on a few optimizations.

That gets you most of the speed of the release version, with a fairly good chance of usable debug info.

A huge issue with C++ debug builds is that the resulting executables are unusably slow, because the zero-cost abstractions are not zero cost in debug builds.


Unless one uses VC++, which can debug release builds.

Similar capabilities could be made available in other compilers.


It's not just the compiler: MSVC, like all the others, has a tendency to mangle code in release builds to such an extent that the debug info is next to useless (which, to be fair, is what I asked it to do, so I don't fault it).

Now to hate a bit on MSVC: its Edit & Continue functionality makes debug builds unbearably slow, but at least it doesn't work, so the first thing I do is turn it off.


Which is why recent versions have a dynamic debugging mode.


You can debug release builds with gcc/clang just fine. They don't generate debug information by default, but you can always request it ("-O3 -g" is a perfectly fine combination of flags).


Not really, because some optimizations make stepping through the code and the like rather confusing.

VC++ dynamic debugging pretends the code motion, inlining, and similar optimizations aren't there and maps back to the original code as written.

Unless this has been improved in gdb/lldb.


Ah, I see what you mean.

GCC can now emit information that can be used to reconstruct the frame pointers for inlined functions (https://lwn.net/Articles/940686/), and it's now filtering through various projects: https://sourceware.org/binutils/wiki/sframe

It will not undo _all_ the transformations, but it helps a lot. I used it for backtraces, and it fixed the missing-frame issues for me.

This was possible with the earlier DWARF format (it's Turing-complete), and I think this is how VC++ does it, although I have not checked.


I think this also massively depends on your domain, familiarity with the code base and style of programming.

I've changed my approach to debugging significantly over time (probably in part due to Rust's slower compile times), and usually get away with 2-3 compiles to fix a bug, but spend more time reasoning about the code.


Doesn’t Rust have incremental builds to speed up debug compilation? How slow are we talking here?


Rust does have incremental rebuilds, yes.

Folks have worked tirelessly to improve the speed of the Rust compiler, and it's gotten significantly faster over time. However, there are also language-level reasons why it can take longer to compile than other languages, though the initial guess of "because of the safety checks" is not one of them; those are quite fast.

> How slow are we talking here?

It really depends on a large number of factors. I think saying "roughly like C++" isn't totally unfair, though again, it really depends.
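
For reference, incremental compilation is on by default for the dev profile, and most of the build-speed knobs live in Cargo profiles. A minimal sketch using the standard profile keys (the values shown are the dev-profile defaults):

    [profile.dev]
    opt-level = 0      # little optimization work, so fast codegen
    debug = true       # keep full debug info
    incremental = true # reuse artifacts from the previous build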


My initial guess would be "because of the zero-cost abstractions", since I read "zero-cost" as "zero runtime cost", which implies shifting cost from runtime to compile time—as would happen with e.g. generics or any sort of global properties.

(Uh oh, there's an em-dash, I must be an AI. I don't think I am, but that's what an AI would think.)
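
To make that concrete, here's a minimal Rust sketch of the tradeoff (the function names are mine, just for illustration): the generic version pushes work to compile time because every instantiation is compiled and optimized separately, while the trait-object version compiles once and pays a small dispatch cost at runtime.

    use std::fmt::Display;

    // "Zero-cost" generics: every concrete T used with this function gets its
    // own monomorphized copy, which the compiler must generate and optimize.
    fn print_all<T: Display>(items: &[T]) {
        for item in items {
            println!("{item}");
        }
    }

    // The dynamic-dispatch alternative: one compiled copy, a vtable call at
    // runtime. Less work at compile time, slightly more at run time.
    fn print_all_dyn(items: &[&dyn Display]) {
        for item in items {
            println!("{item}");
        }
    }

    fn main() {
        print_all(&[1, 2, 3]);   // instantiates print_all::<i32>
        print_all(&["a", "b"]);  // instantiates print_all::<&str>

        let mixed: [&dyn Display; 2] = [&1, &"a"];
        print_all_dyn(&mixed);   // one compiled copy handles both types
    }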


I used em dashes before AI, and won't stop now :)

That's sort of part of it, but there are also specific language design choices that, had they been decided differently, might make things faster.


People do have cold Rust compiles that can push up into the hours. Large crates often make design choices that put them in a more compile-time-friendly shape.

Note that C++ has almost as large a problem with compile times, with large build fanouts including from templates, and it's not always realistic for incremental builds to solve it either, especially the time burnt on linking. E.g., I believe Chromium development often uses a mode with dynamic .dll linking, instead of the fully statically linked binaries they release, exactly to speed up incremental development. The "fast" case is C, not C++.


> I believe Chromium development often uses a mode with dynamic .dll linking, instead of the fully statically linked binaries they release, exactly to speed up incremental development. The "fast" case is C, not C++.

Bevy, a Rust ECS framework for building games (among other things), has a similar solution: a Cargo feature (called "dynamic_linking") that switches to dynamic linking. https://bevy.org/learn/quick-start/getting-started/setup/#dy...
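
Enabling it is roughly a one-liner; a sketch (the version number is illustrative, check the linked setup page for the current one):

    # Cargo.toml
    [dependencies]
    bevy = { version = "0.14", features = ["dynamic_linking"] }

    # or, without editing Cargo.toml:
    #   cargo run --features bevy/dynamic_linking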


There's no Rust codebase that takes hours to compile cold unless 1) you're compiling a massive codebase in release mode with LTO enabled, in which case, you've asked for it, 2) you've ported Doom to the type system, or 3) you're compiling on a netbook.


I'm curious if this is tracked or observed somewhere; crater runs are a huge source of information, and metrics about the compilation time of crates would be quite interesting.


I know some large orgs have this data for internal projects.

This page gives a very loose idea of how we're doing over time: https://perf.rust-lang.org/dashboard.html


Down and to the right is good, but is the claim here that the average full release build is only 2 seconds?


Those are graphs of averages from across the benchmarking suite, which you can read much more information about here: https://kobzol.github.io/rust/rustc/2023/08/18/rustc-benchma...


> the rooted user could use employee only methods. Somewhere or other every bank has methods that set balances on accounts.

Exposing these kinds of APIs outside the bank in any way, ever, would be gross negligence.


I had this idea during the pandemic, 5 years ago now, and even did some of the work to figure out which variables I'd need to extract to make it work, but I never found the time/motivation to work on it for real. Really happy to see someone put in the effort.


The productive capacity for the goods we consume was built by average people. Billionaires only exist to skim off the top, and are not a required component of the process.


What does this have to do with the discussion?


Refuting the above claim that billionaires produce the "goods that we consume".


That's a misunderstanding. Compression algorithms are typically designed with a tunable state size parameter. The issue is that if you have a large transfer where one side might crash and resume, you need some way to persist the state so you can pick up where you left off.
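
A hedged sketch of what that can look like in practice (the names here are hypothetical, not any particular library's API): flush the compressor at checkpoint boundaries and record the offsets, so a resumed transfer restarts from the last checkpoint with a fresh encoder state instead of needing the full in-flight state.

    // Hypothetical types for illustration only.
    #[derive(Clone, Copy)]
    struct Checkpoint {
        input_offset: u64,  // bytes of the source consumed at this boundary
        output_offset: u64, // bytes of the compressed stream written so far
    }

    // Find where to resume: the last checkpoint at or before the point where
    // the receiver stopped. Compression restarts cleanly from that boundary.
    fn resume_point(checkpoints: &[Checkpoint], received_output: u64) -> Checkpoint {
        checkpoints
            .iter()
            .rev()
            .find(|c| c.output_offset <= received_output)
            .copied()
            .unwrap_or(Checkpoint { input_offset: 0, output_offset: 0 })
    }

    fn main() {
        let cps = [
            Checkpoint { input_offset: 0, output_offset: 0 },
            Checkpoint { input_offset: 1 << 20, output_offset: 400_000 },
            Checkpoint { input_offset: 2 << 20, output_offset: 790_000 },
        ];
        let resume = resume_point(&cps, 500_000);
        println!("resume from input byte {}", resume.input_offset);
    }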

