
At its heart, this is about Europe for Europe. People from other countries “contributing” technology solutions to European businesses and government is what got Europe into the strange mess they’re in now. And there’s been a long line of foreign - American - businesses which have promised that European data will always stay on European soil. And it’s quite clear that promise was not always kept.

I’m sure your desire to help is genuine. But Europe might need to find their own feet with an initiative like this before accepting help from foreigners.


I'd look at it in another way: hyperscalers exist thanks to code contributed from all around the world, often in the form of open source. Europe going closed and competing against the rest of the world (literally) isn't going to be a path forward.

Clients of mine are on hyperscalers due to the ease of deployment etc., but those platforms are focused on lock-in. If that ease could be attained in combination with portability, an ecosystem could exist where mid-scale providers (which exist in abundance in Europe) would have a better chance against the behemoths.


> Europe going closed and competing against the rest of the world (literally) isn't going to be a path forward.

Yes, I think this might be the actual way to help: write open source software that can be used by everyone, including by commercial products in the EU.

Google, Microsoft and Amazon have a moat because of how difficult it is to build viable competitors to their products. I'd love to see more open source libraries and applications chip away at this. How hard would it be to build a self-hosted Google Docs competitor?


I believe this is one of the drivers behind IBM's recent Sovereign Core announcement [0].

“Technically, IBM Sovereign Core builds on open-source technology from the Red Hat ecosystem. The software uses OpenShift, among other things, and is designed to run on existing infrastructure. Organizations can deploy the platform in on-premises data centers, regional cloud environments, or through local service providers.”

* Disclaimer: I’m an IBMer

[0] https://www.techzine.eu/news/privacy-compliance/137981/ibm-l...


> People from other countries “contributing” technology solutions to European businesses and government is what got Europe into the strange mess they’re in now.

Well, if Europe had existed without them, it likely wouldn't ever have home-grown all the advances from the more entrepreneurially minded countries.


Very few people fall behind at the moment due to lack of access to information. People in poor countries largely have access to the internet now. It doesn’t magically make people educated and economically prosperous.

You are arguing the converse. Access to information doesn't make people educated, but lack of access definitely puts people at a big disadvantage. Chatbots are not just information, they are tools, and using them takes training because they hallucinate.

I wonder if education will bifurcate back out as a result of AI. Small, bespoke institutions which insist on knowledge and difficult tests. And degree factories. It seems like students want the degree factory experience with the prestige of an elite institution. But - obviously - that can’t last long. Colleges and universities should decide what they are and commit accordingly.

I think the UK has been heading this way for a while -- since before AI. It's not been the size of the institutions that has changed, but the "elite" universities tend to give students more individual attention. A number of them (not just Oxford and Cambridge) have tutorial systems where a lot of the learning is done in small groups (usually two or three students). They have always done this.

At the other extreme are universities offering low quality courses that are definitely degree factories. They tend to have a strong vocational focus but nonetheless they are not effective in improving employability. In the last few decades we have expanded the university system and there are far more of these.

There is no clear cutoff, and there's a lot of variation in between, so it's not a bifurcation, but the quality vs factory difference is there.


> 2FA is "something you have" (or ".. you are", for biometrics): it is supposed to prove that you currently physically possess the single copy of a token. The textbook example is a TOTP stored in a Yubikey.

No, 2FA means authentication using two of the following three factors:

- What you know (eg password)

- What you have (eg physical token)

- What you are (eg biometrics)

You can "be the 2FA" without a token by combining a password (what you know) and biometrics (what you are). Eg, fingerprint reader + password, where you need both to login.


Of course, but in most applications the use of a password is a given, so in day-to-day use "2FA" has come to mean "the other auth method, besides your password".

Combine that with the practical problems with biometrics when trying to auth to a remote system, and in practice that second factor is more often than not "something you have". And biometrics is usually more of a three-factor system, with the device you enrolled your fingerprints on being an essential part of the equation.


This.

GP ignores the conventions of the field.


I search for stuff all the time. But full disk search just never seems to solve the problems I have. Whatever keyword I'm looking for will inevitably show up in thousands of unrelated header files, Python files and JavaScript files in various node_modules directories and whatnot. Search in finder (or spotlight) is always way too noisy to actually do what I want it to do. Spending hours of cpu time to build such a useless index is deeply disrespectful.

The typical find one-liner to do a full-text search invokes sed. sed supports regular expressions, so you can do quite a bit more than a simple text match. And you can also run various filter chains on the results.

I want spotlight to open applications and system settings. But full disk indexing makes spotlight basically useless for that, because its index is filled with crap. Instead of opening zed I accidentally open some random header file that’s sitting around in Xcode. It’s worse than useless. And that says nothing of the grotesque amount of cpu time spotlight wants to spend indexing all my files.

A feature I never wanted has ruined a feature I do want. It’s a complete own goal. In frustration I turned spotlight off completely a few months ago. Rip.


I think it's been said in this thread already, but it sounds like what you want is Alfred (https://www.alfredapp.com/). It's a great app; I use it every few minutes, every day.

also, for opening apps, https://charmstone.app/ is pretty great.


I'm also in OP's boat and, even though these are great suggestions, I'd personally like to be able to do something as basic as opening an app with a built-in tool, rather than having to download yet another app for it. With every major macos update I have to worry about spotlight reindexing things.

What I find really annoying with macos is that with stock/default settings it's the worst UX. You have to download an app to launch apps, an app to move and resize windows, an app to reverse the mouse's wheel direction so it's the opposite of the trackpad's, and an app to manage the menu bar (especially to decrease the spacing, so that you can fit items all the way up to the notch). Then you also need to spend an hour tweaking settings and running custom commands (such as `defaults write -g ApplePressAndHoldEnabled -bool false`, so that you can actually type stuff like aaaaaaaaaaaaaaaaaaaaa). All of this is just what's needed to make using macos bearable, and doesn't include any "power user" stuff.

I used to hate macos before getting my own Mac, because I had to use some at work with their default settings and it was just a horrible experience.


If some process is going to take hours of cpu time, it should be opt-in. At a minimum I'd like to be able to turn the bloody things off when I don't want them.

I run cpu usage meters in my menu bar. The efficiency cores always seem busy doing one thing or another on modern macOS. It feels like Apple treats my e-cores as a playground for stupid features that their developers want a lot more than I do - like photoanalysisd, or the file indexing that powers spotlight, which hasn't worked how I want it to for a decade.

I have a Linux workstation, and the difference in responsiveness is incredible. Linux feels like a breath of fresh air. On a technical level, my workstation cpu is barely any faster. But it idles at 0%. Whenever I issue a command, I feel like the computer has been waiting for me and then it springs to action immediately.

To your point, I don't care why these random processes are using all my cpu. I just want them to stop. I paid good money for my Apple laptop. The computer is for me. I didn't pay all that money so some Apple engineer could vomit all over it with their crappy, inefficient code.


> You can download the app as an .apk from their website if you don't trust Google Play Store.

I wish Apple & Google provided a way to verify that an app was actually compiled from some specific git SHA. Right now applications can claim they're open source, and claim that you can read the source code yourself. But there's no way to check that the authors haven't added any extra nasties to the code before building and submitting the APK / iOS application bundle.

It would be pretty easy to do. Just have a build process at Apple / Google which you can point at a git repo, and let them build the application. Or - even easier - just have a way to see the application's signature in the app store. Then open source app developers could compile their APK / iOS app using GitHub Actions, and 3rd parties could check that the SHA matches the app binaries in the store.
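
Assuming fully reproducible builds, the 3rd-party check could then be as simple as comparing the store artifact against one built from the published source. A toy sketch (the file names are made up, and in practice you'd compare signed SHA-256 digests rather than raw bytes, since store signing alters the file):

    use std::fs;

    fn main() -> std::io::Result<()> {
        // Hypothetical inputs: one binary downloaded from the store,
        // one built from the published git SHA (e.g. on CI).
        let store_build = fs::read("app-from-store.apk")?;
        let source_build = fs::read("app-built-from-source.apk")?;

        // With reproducible builds, the two artifacts are bit-identical.
        if store_build == source_build {
            println!("match: the store binary corresponds to the published source");
        } else {
            println!("mismatch: the store binary can't be verified");
        }
        Ok(())
    }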


This is what F-droid does (well, I suspect most apps don't have reproducible builds that would allow 3rd-party verification), but Signal does not want 3rd-party builds of their client anyhow.

They could still figure out a way to attest their builds against source.

This is much harder when Signal actively goes against that.

A few years ago I pulled a Rust library into a Swift app on iOS via static linking & C FFI. And I had a tiny bit of C code bridging the languages together.

When I compiled the final binary, I ran LLVM LTO across all 3 languages. That was incredibly cool.
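
For anyone curious, the Rust side of that kind of bridge is tiny. A minimal sketch (the function name is illustrative, not from my actual app): build it with crate-type = ["staticlib"] in Cargo.toml, declare it in a C header, and Swift can call it directly.

    // Exported with an unmangled name and the C ABI so the C header
    // (and Swift, via that header) can see it:
    //   int32_t add_numbers(int32_t a, int32_t b);
    #[no_mangle]
    pub extern "C" fn add_numbers(a: i32, b: i32) -> i32 {
        a + b
    }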


It is?? Can you give some examples of high performance stuff you can do using C++'s template system that you can't do in rust?

They are likely referring to the scope of fine-grained specialization and compile-time codegen that is possible in modern C++ via template metaprogramming. Some types of complex optimizations common in C++ are not really expressible in Rust because the generics and compile-time facilities are significantly more limited.

As with C, there is nothing preventing anyone from writing all of that generated code by hand. It is just far more work and much less maintainable than e.g. using C++20. In practice, few people have the time or patience to generate this code manually so it doesn't get written.

Effective optimization at scale is difficult without strong metaprogramming capabilities. This is an area of real strength for C++ compared to other systems languages.


Again, can you provide an example or two? It's hard to agree or disagree without an example.

I think all the wild C++ template stuff can be done via proc macros. Eg, in Rust you can add #[derive(Serialize, Deserialize)] to get a highly performant JSON parser & serializer. And that's just lovely. But I might be wrong? And maybe it's ugly? It's hard to tell without real examples.
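
For reference, here's the kind of thing I mean. A minimal serde sketch (this assumes the serde and serde_json crates, with serde's derive feature enabled):

    use serde::{Deserialize, Serialize};

    // The derive proc macros generate the serialization and
    // deserialization code for this struct at compile time.
    #[derive(Serialize, Deserialize, Debug)]
    struct Point {
        x: i32,
        y: i32,
    }

    fn main() -> serde_json::Result<()> {
        let p = Point { x: 1, y: 2 };
        let json = serde_json::to_string(&p)?; // {"x":1,"y":2}
        let back: Point = serde_json::from_str(&json)?;
        println!("{} -> {:?}", json, back);
        Ok(())
    }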


Rust doesn't allow specialization, and likely never will because it's unsound. https://www.reddit.com/r/rust/comments/1p346th/specializatio... has a couple of nice comments about it.

But yes, it's basically:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Generic version: stores its elements in a vector.
    template <typename T, std::size_t N>
    class Example { std::vector<T> generic; };

    // Full specialization for <int32_t, 32>: free to use a completely
    // different, bit-packed representation.
    template <>
    class Example<std::int32_t, 32> { int bitpacking_here; };


Specialization isn't stable in Rust, but is possible with C++ templates. It's used in Rust's standard library for performance reasons, but it's not clear if it'll ever land for users (see the nightly sketch below).

> As with C, there is nothing preventing anyone from writing all of that generated code by hand. It is just far more work and much less maintainable than e.g. using C++20.

It's also still less elegant, but compile-time codegen for specialisation is part of the language (build system?) via build.rs & macros. serde makes strong use of this to generate its serialisation/deserialisation code.
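
For the curious, here's roughly what specialization looks like on nightly Rust today, behind the unstable min_specialization feature gate (a sketch; this won't compile on stable):

    #![feature(min_specialization)]

    trait Describe {
        fn describe(&self) -> String;
    }

    // Blanket implementation for every type.
    impl<T> Describe for T {
        default fn describe(&self) -> String {
            "something generic".to_string()
        }
    }

    // Specialized implementation that overrides the blanket impl
    // for one concrete type.
    impl Describe for i32 {
        fn describe(&self) -> String {
            format!("the i32 {}", self)
        }
    }

    fn main() {
        println!("{}", 1.5f64.describe()); // "something generic"
        println!("{}", 42i32.describe());  // "the i32 42"
    }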

