TRACKING: Only privacy-respecting essentials:
- Sliplane (European hosting) server logs
- No Google Analytics, no third-party trackers
What made you think there's tracking? I want to fix any privacy concerns immediately. This is a European digital sovereignty project & privacy is the whole point.
Can you share what triggered the concern? (Specific script/banner you saw?) Thanks for your help.
Clicking on "Customize" on the cookie consent banner reveals toggles for the following:
> Analytics Cookies
> Help us understand how visitors interact with our website. We use privacy-first analytics.
Tracking.
> Marketing Cookies
> Used to track visitors and show relevant advertisements.
Ads and more tracking.
> Preference Cookies
> Remember your preferences like language and theme settings.
Do these actually require separate consent, or can they be considered functional?
I would expect that a European digital sovereignty project in which privacy is the whole point would not have a cookie consent banner at all, because it would simply not use any non-functional cookies that would require it. I see the cookie banner as a sort of "mark of shame" that nefarious websites are forced to wear.
Also, I recall hearing that there were plans to make highlighting the "Accept All" button above the other options illegal, because it's a dark pattern that gets people to click the highlighted option more often.
Thank you for your persistence and pushback. Despite good intentions, I fell into the boilerplate trap.
In the meantime I:
- Removed the 'mark of shame' :)
- Dropped all cookies; Umami analytics now uses localStorage only
- Added a simple opt-out in the footer, no dark patterns, just 'learn more, opt-out'
- Updated the privacy policy to reflect this
Your feedback made this project better. Thank you :)
I understand the skepticism, but let me address this:
"LLM shovelware": The articles are curated from around 30 European news sources (TechCrunch Europe, Sifted, The Verge, etc.). AI is only used for:
1. Translation (EN→NL/DE/FR/ES/IT)
2. Pattern-based image generation
The curation, source selection, and quality filters are all manual.
"Self-promotion": Fair point on the account activity. I created this account specifically to share this project with HN because the community values European tech sovereignty and privacy.
Happy to answer specific questions about the implementation. The goal is NOT traffic farming; it's building a multilingual resource for European digital policy/startups.
Docker is unusable for build tools that use namespaces (of which Docker itself is one), unless you use privileged mode and throw away much more security than you'd need to. Docker images are difficult to reproduce with conventional Docker tools, and using a non-reproducible base image for your build environment seems like a rather bad idea.
Why not? Doesn't it depend on the type of NAT used?
As I understand it, most consumer devices will set up a port mapping which is completely independent of the destination's IP and port. It's just "incoming packet for $wanip:567 goes to $internal:123, outgoing packet from $internal:123 gets rewritten to appear from $wanip:567". This allows any packet towards $wanip:567 to reach the internal host - both the original server the client initiated the connection to, and any other random host on the internet. Do this on two clients, have the server tell them each other's mappings, and they can do P2P comms: basic hole punching. I believe this is usually called "Full Cone NAT".
However, nothing is stopping you from setting up a destination-dependent mapping, where it becomes "incoming packet from $server:443 to $wanip:456 goes to $internal:123, outgoing packet from $internal:123 to $server:443 gets rewritten to appear from $wanip:456". This would still work totally fine for regular client-to-server communication, but the mapping would only work for that specific server. A packet heading towards $wanip:456 would get dropped because the source isn't $server:443 - or it could even get forwarded to another host on the NATed network. This would block traditional hole punching. I believe this is called "Address Restricted Cone NAT" if it filters only on source IP, or "Port Restricted Cone NAT" if it filters on both source IP and source port.
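To make the basic hole punching described above concrete, here's a minimal sketch in Python (an illustration only, not anyone's actual implementation; the peer address is a placeholder for what a rendezvous/STUN-style server would report):

    # Minimal UDP hole-punching sketch. Assumes both peers already learned
    # each other's public mapping out of band; the address is a placeholder.
    import socket

    LOCAL_PORT = 51000                  # our internal port ($internal:123)
    PEER = ("203.0.113.7", 50567)       # peer's public mapping ($wanip:567)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", LOCAL_PORT))
    sock.settimeout(1.0)

    for attempt in range(10):
        # The outbound packet creates (or refreshes) our NAT mapping. Under
        # Full Cone NAT that mapping accepts packets from anyone; under the
        # (port-)restricted variants it only accepts the peer we just sent
        # to - which is fine, because the peer runs this same loop toward us.
        sock.sendto(b"punch", PEER)
        try:
            data, addr = sock.recvfrom(1500)
            print("hole punched:", data, "from", addr)
            break
        except socket.timeout:
            continue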
If your NAT allows arbitrary connections out, and you're patient enough, you can probably finagle a peer to peer connection, eventually. Here's a discussion about that [1]. But that math is based on each NAT having a single external address; if your NAT spreads you over multiple addresses, the math gets much worse.
And there are a lot of other considerations; chances are your NAT won't be happy if you send all those probe packets at once, and your user may not be either. It's probably only worth doing exhaustive probing if the connection is long-lived and proxying is expensive (in dollars because of bandwidth, or in latency).
The feasibility of this assumes only one peer is behind an endpoint-dependent mapping. That's great if you only care about peers working with you and you control your style of NAT, but it's still pretty broken if you want this to work for any two peers. In practical terms, the success rate goes from something like the 64% with 256 probes down to something less than 0.01%.
If you can manage to bump it up to 65536 probes without getting blocked, hitting a NAT limit, or causing the user to fall asleep waiting, then it should hit the same success rate :D. I'm not sure many would like to use that P2P service though, at that point just pay for the TURN server.
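For what it's worth, here's a back-of-the-envelope check of those figures, under the assumption (mine, following the linked birthday-paradox discussion) that success means at least one of n independent probes hits an open mapping:

    # Rough model behind the probe-count figures in this thread.
    def p_success(n, hit_prob):
        """P(at least one of n independent probes hits an open mapping)."""
        return 1 - (1 - hit_prob) ** n

    PORTS = 65536
    # One hard side with 256 open mappings, peer probes 256 random ports:
    # per-probe hit chance 256/65536, overall ~63% -- the "64%" figure above.
    print(p_success(256, 256 / PORTS))    # ~0.632
    # Both sides hard: the prober's own source port is unpredictable too, so
    # the per-probe hit chance collapses (exact value depends on NAT model).
    # If it falls to roughly 1/65536, ~65536 probes gets back to the same ~64%:
    print(p_success(65536, 1 / PORTS))    # ~0.632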
64k probes is a lot, but it might be reasonable if you're trying to get something like WireGuard connected between locations that are behind CGNAT: send 10 probes a second for a couple of hours, then stay connected for a long time. Of course, CGNAT might split your traffic over multiple IPs, and then the math is terrible.
If you need to send 64k probes to get p2p and you want to make a 15-minute call, it probably doesn't make sense, but it's probably worth trying a bit in case you catch an easy case. Not that p2p is always better than going through a relay, but it's often less expensive.
The math doesn't quite work out that conveniently, in that at least one side needs to actually initiate (and keep alive) 65k sessions through their NAT while the other tests 10 of those ports at a time. If you just do 10 at a time on both sides until you've done 65k total, you end up with even worse odds than having just done 256 at once, due to the birthday-paradox nature of the problem.
For WireGuard that might be fine, because you likely control the head end, and opening ~65k NAT sessions is something you can opt to do if you tune things accordingly. Of course, in that case, you can also opt to use the more lenient form of NAT at your head end and just attempt with 256 ports instead.
Fair enough, I didn't go through the math. I don't think many NATs are realistically likely to let a single client run 64k sessions.
ISPs are increasingly putting customers behind CGNAT, so WireGuard at home doesn't imply control over NAT policies. Especially new entrants and fixed wireless ISPs don't tend to have the resources to get an IP (v4) for every customer, and some of them don't offer v6 either, so having some form of hope would be nice.
Try doing it over a network that only allows connections through a SOCKS/Squid proxy, or on a network that uses CG-NAT (i.e., double-NAT).
See also:
> UDP hole punching will not work with symmetric NAT devices (also known as bi-directional NAT) which tend to be found in large corporate networks. In symmetric NAT, the NAT's mapping associated with the connection to the known STUN server is restricted to receiving data from the known server, and therefore the NAT mapping the known server sees is not useful information to the endpoint.
TCP Simultaneous Open. If two clients happen to connect to each other's ephemeral ports at exactly the same moment, they connect to each other with no server involved. It should work the same as UDP hole punching but with a much smaller time window.
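A toy sketch of what that would look like (assumptions: both sides use a fixed, pre-agreed port rather than a true ephemeral one, know each other's address out of band, and start at nearly the same moment; a real attempt would coordinate via a server and retry):

    # Toy TCP simultaneous-open sketch. Both peers run this at (nearly) the
    # same moment with each other's address; if the SYNs cross in flight, the
    # kernels complete the handshake with no listening server involved.
    import socket

    LOCAL_PORT = 54321                 # fixed rather than ephemeral, so the
    PEER = ("203.0.113.7", 54321)      # other side knows what to connect to

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("0.0.0.0", LOCAL_PORT))
    sock.connect(PEER)                 # no listen()/accept() on either side
    sock.sendall(b"hello via simultaneous open\n")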
"Unfortunately, no matter how hard you try, there is a certain percentage of nodes for whom hole punching will never work. This is because their NAT behaves in an unpredictable way. While most NATs are well-behaved, some aren’t. This is one of the sad facts of life that network engineers have to deal with."
In this scenario, the article goes on to describe a conventional relay-based approach.
I would guess that most consumer routers are very cooperative as far as hole punching goes, because it's pretty critical functionality for BitTorrent and many online games. Corporate firewalls wouldn't be as motivated to care about those use cases, or may want to actively block them.
I think the parent's point is a bit like "you can't disallow lock picking": the term "hole punching" is used to describe techniques that are intentionally trying to bypass whatever thing others (particularly corporations) put in the way, sometimes for good reasons and sometimes for kind of shit reasons.