MiddleMan5's comments | Hacker News

It should be noted here that the Evil Bit proposal was an April Fools RFC https://datatracker.ietf.org/doc/html/rfc3514


While we're at it, it should be noted that Do Not Track was not, apparently, a joke.

It's the same as a noreply email: if you can get away with sticking your fingers in your ears and humming when someone is telling you something you don't want to hear, and you have a computer to hide behind, then it's all good.


There should be a law against displaying a cookie consent box to a user who has their Do Not Track header set.


Not all that far-fetched, Global Privacy Control is legally binding in California.

https://en.wikipedia.org/wiki/Global_Privacy_Control

https://news.ycombinator.com/item?id=43377867
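For what it's worth, honoring these signals server-side is trivial. A minimal Python sketch; the header names (`DNT` and `Sec-GPC`) are the ones defined by the DNT and GPC specs, but the function name and request dict are made up for illustration:

```python
def should_skip_tracking(headers: dict) -> bool:
    """Treat either signal as an opt-out of tracking: the legacy
    Do Not Track header ("DNT: 1") or the Global Privacy Control
    header ("Sec-GPC: 1")."""
    return headers.get("DNT") == "1" or headers.get("Sec-GPC") == "1"

# Hypothetical incoming request headers:
print(should_skip_tracking({"Sec-GPC": "1"}))  # True
print(should_skip_tracking({}))                # False
```

The hard part was never the engineering; it's that nothing compels the server to call anything like this.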


How is "Do Not Track" a joke, but a website presenting a "Do not use cookies" button is not? What's the difference?


It is ridiculous, but it is what you get when you have conflicting interests and broken legislation. The rule is that tracking has to be opt-in, so websites do it in whatever way makes people most likely to opt in, and that turns out to be a cookie banner before you access the content.

Do-not-track is opt-out, not opt-in, and in fact, it is not opt-anything since browsers started to set it to "1" by default without asking. There is no law forcing advertisers to honor that.

I guess it could work the other way: if you set do-not-track to 0 (meaning "do-track"), which no browser does by default, make cookies auto-accept and do not show the banner. But then the law says that it should require no more actions to refuse consent than to consent (to counter those ridiculous "accept or uncheck 100 boxes" popups), so it would mean they would also have to honor do-not-track=1, which they don't want to.

I don't know how legislation could be unbroken. Users don't want ads, don't want tracking, they just want the service they ask for and don't want to pay for it. Service providers want exactly the opposite. Also people need services and services need users. There is no solution that will satisfy everyone.


Labor laws are not set to satisfy everyone, they are set such that a company cannot use its outsized power to exploit its workers, and such that workers have a fair chance at negotiating a fair deal, despite holding less power.

Similarly consumer protection laws—which the cookie banners are—are not set to satisfy everyone, they are set such that companies cannot use their outsized power to exploit their customers. A good consumer protection law will simply ban harmful behavior regardless of whether the companies which engage in said harmful behavior are satisfied with that ban or not. A good consumer protection law will satisfy the user (or rather the general public), but it may not satisfy the companies.


Good consumer protection laws are things like disclosure requirements or anti-tying rules that address information asymmetries or enable rather than restrict customer choice.

Bad consumer protection laws try to pretend that trade offs don't exist. You don't want to see ads, that's fine, but now you either need to self-host that thing or pay someone else money to do it because they're no longer getting money from ads.

There is no point in having an opt in for tracking. If the user can be deprived of something for not opting in (i.e. you can't use the service) then it's useless, and if they can't then the number of people who would purposely opt in is entirely negligible and you ought to stop beating around the bush and do a tracking ban. But don't pretend that's not going to mean less "free stuff".

The problem is legislators are self-serving. They want to be seen doing something without actually forcing the trade off that would annihilate all of these companies, so instead they implement something compromised to claim they've done something even though they haven't actually done any good. Hence obnoxious cookie banners.


That whole argument assumes that you as a consumer can always find a product with exactly the features you want. Because that's a laughable fiction, there need to be laws with teeth to punish bad behaviors that nearly every product would indulge in otherwise. That means things like requiring sites to get permission to track, and punishing those that track users without permission. It's a good policy in theory, but it needs to be paired with good enforcement, and that's where things are currently lacking.


> That whole argument assumes that you as a consumer can always find a product with exactly the features you want. Because that's a laughable fiction

There are very many industries where this is exactly what happens. If you want a stack of lumber or a bag of oranges, it's a fungible commodity and there is no seller who can prevent you from buying the same thing from someone else if you don't like their terms.

If this is ever not the case, the thing you should be addressing is that, instead of trying to coerce an oligopoly that shouldn't exist into behaving under the threat of government penalties rather than competitive pressure. Because an uncompetitive market can screw you in ten thousand different ways regardless of whether you've made a dozen of them illegal.

> That means things like requiring sites to get permission to track, and punishing those that track users without permission. It's a good policy in theory, but it needs to be paired with good enforcement, and that's where things are currently lacking.

It's not a good policy in theory because the theory is ridiculous. If you have to consent to being tracked in exchange for nothing, nobody is going to do that. If you want a ban on tracking then call it what it is instead of trying to pretend that it isn't a ban on the "free services in exchange for tracking data" business model.


I think you might be misunderstanding the purpose of consumer protection. It is not about consumer choice, but rather about protecting consumers from the inherent power imbalance that exists between a company and its customers. If there is no way of providing a service for free without harming the customers, the service should be regulated such that no vendor is able to provide it for free. It may seem punishing for the customers, but it is not. It protects the general public from this harmful behavior.

I actually agree with you that cookie banners are a bad policy, but for a different reason. As I understand it there are already requirements that the same service should also be available to opt-out users, however as your parent noted, enforcement is an issue. I, however, think that tracking users is extremely consumer hostile, and I think a much better policy would be a simple ban on targeted advertising.


> I think you might be misunderstanding the purpose of consumer protection. It is not about consumer choice, but rather it is about protecting consumer from the inherent power imbalance that exists between the company and their customers.

There isn't an inherent power imbalance that exists between the company and their customers, when there is consumer choice. Which is why regulations that restrict rather than expand consumer choice are ill-conceived.

> If there is no way to doing a service for free without harming the customers, this service should be regulated such that no vendor is able to provide this service for free.

But that isn't what those regulations do, because legislators want to pretend to do something while not actually forcing the trade off inherent in really doing the thing they're only pretending to do.

> I, however, think that tracking users is extremely consumer hostile, and I think a much better policy would be a simple ban on targeted advertising.

Which is a misunderstanding of the problem.

What's actually happening in these markets is that we a) have laws that create a strong network effect (e.g. adversarial interoperability is constrained rather than required), which means that b) the largest networks win, and the networks available for free then become the largest.

Which in turn means you don't have a choice, because Facebook is tracking everyone but everybody else is using Facebook, which means you're stuck using Facebook.

If you ban the tracking while leaving Facebook as the incumbent, two things happen. First, those laws are extremely difficult to enforce because neither you nor the government can easily tell what they do with the information they inherently get from the use of a centralized service, so they aren't effective. And second, they come up with some other business model -- which will still be abusive because they still have market power from the network effect -- and then get to blame the new cash extraction scheme on the law.

Whereas if you do what you ought to do and facilitate adversarial interoperability, that still sinks their business model, because then people are accessing everything via user agents that block tracking and ads, but it does it while also breaking their network effect by opening up the networks so they can't use their market power to swap in some new abusive business model.


I am not a legislator, nor an expert in consumer law, and there is no way I could think of a regulation against targeted advertising, but that doesn’t mean it is impossible. I think claiming it to be impossible demonstrates a lack of imagination. And I would even think some consumer protection, or privacy advocacy groups have already drafted some legislation outlines for regulating targeted ads (as I said, I’m not an expert, and wouldn’t even know where to begin looking for one [maybe the EFF?]).

> There isn't an inherent power imbalance that exists between the company and their customers

That is very simplistic, and maybe idealistic from an unrealistic view of free-market capitalism. But there is certainly an inherent power imbalance. Before leaded gasoline was banned, it was extremely hard for an environmentally conscious consumer to make the ethical choice and buy unleaded gasoline. Before seatbelts were required, a safety-aware consumer might still have bought a car without one simply because the cars with seatbelts were either unavailable or unaffordable. Those aren’t real choices, but rather choices which are forced onto the consumer as a result of the competitive environment where the consumer-hostile option generates much more revenue for the company.


> I am not a legislator, nor an expert in consumer law, and there is no way I could think of a regulation against targeted advertising, but that doesn’t mean it is impossible.

The hard part isn't the rule, it's the enforcement.

To begin with, banning targeted advertising isn't really what you want to do anyway. If you have a sandwich shop in Pittsburgh and you put up billboards in Pittsburgh but not in Anchorage, you're targeting people in Pittsburgh. If you sell servers and you buy ads in a tech magazine, you're targeting tech people. I assume you're not proposing to require someone who wants to buy ads for their local independent pet store to have nearly all of them shown to people who are on the other side of the country?

What you're really trying to do is ban the use of individualized tracking data. But that's extremely difficult to detect, because if you tell Facebook "show this ad to people in Miami", how do you know if it's showing them to someone because they're viewing a post likely to be popular with people in Miami in general vs. because the company is keeping surveillance dossiers on every individual user?

The only thing that actually works is for them not to have the data to begin with. Which is the thing where you have to empower user agents to provably constrain what information services have about their users, i.e. adversarial interoperability.

> That is very simplistic, and maybe idealistic from an unrealistic view of free-market capitalism.

It's a factual description of competitive markets.

> Before leaded gasoline was banned, it was extremely hard for environmentally conscious consumer to make the ethical choice and buy unleaded gasoline.

The ban on leaded gasoline isn't a consumer protection regulation, it's an environmental regulation. Gas stations weren't selling leaded gasoline in spite of customers preferring unleaded, they were selling it because it was cheaper to make and therefore what customers preferred in the absence of a ban. It's a completely different category of problem and results from an externality in which the seller and the buyer both want the same thing but that thing harms some third party who isn't participating in the transaction.

> Before seatbelts were required, a safety aware consumer might still have bought a car without one simply because the cars with seatbelts were either unavailable or unaffordable.

This is how safety features evolve.

Seat belts were invented in the 19th century but we didn't start getting strong evidence of their effectiveness until the 1950s and 60s. Meanwhile that's the same period of time the US started building the interstate system with the corresponding increase in vehicle ownership, and therefore accidents.

So into the 1960s there was an increasing concern about vehicle safety, the percentage of cars offered with seat belts started increasing, and then Congress decided to mandate them -- which is what the market was already doing, because the customers (who are largely the same people as the voters) were demanding it.

That is a consistent trend. Things like that get mandated just as the majority of the market starts offering them, and then Congress swoops in to take credit for the benefit of what was already happening regardless.

What those laws really do is a) increase compliance costs (and therefore prices), and b) prohibit the minority of customers from buying something for specific reasons which is different than what the majority wants, because it's banned. For example, all cars are now required to have anti-lock brakes, but ABS can increase stopping distances on certain types of terrain. A professional driver who is buying a vehicle for specific use on those types of terrain is now prohibited from buying a vehicle without ABS on purpose even though it's known to cause safety problems for them.

> Those aren’t real choices, but rather choices which are forced onto the consumer as a result of the competitive environment where the consumer hostile option generates much more revenue for the company.

That type of choice is the thing that specifically doesn't happen in a competitive market, because then the consumer goes to a competitor.

Where it does happen is in uncompetitive markets, but in that case what you need is not to restrict the customer's choices, it's to increase competition.


> since browsers started to set it to "1" by default without asking

IIRC IE10 did that, to much outcry because it upended the whole DNT being an explicit choice; no other browser (including Edge) set it as a default.

There have been thoughts about using DNT (the technical communication mechanism to express consent/objection) in conjunction with GDPR (the legal framework to enforce consent/objection compliance):

https://www.w3.org/blog/2018/do-not-track-and-the-gdpr/

The GDPR explicitly mentions objection via technical means:

> In the context of the use of information society services, and notwithstanding Directive 2002/58/EC, the data subject may exercise his or her right to object by automated means using technical specifications.

https://law.stackexchange.com/a/90002

People like to debate as to whether DNT itself has enough meaning:

> Due to the confusion about this header's meaning, it has effectively failed.

https://law.stackexchange.com/a/90004

I myself consider DNT as what it means at face value: I do not want to be tracked, by anyone, ever. I don't know what's "confusing" about that.

The only ones that are "confused" are the ones it would be detrimental to, i.e. the ones that perform and extract value from the tracking, and make people run in circles with contrived explanations.

It would be perfectly trivial for a browser to pop up a permission request per website like there is for webcams or microphone or notifications, and show no popup should I elect to blanket deny through global setting.


For one, Do Not Track is on the client side and you just hope and pray that the server honors it, whereas cookie consent modals are something built by and placed in the website.

I think you can reasonably assume that if a website went through the trouble of making such a modal (for legal compliance reasons), the functionality works (also for legal compliance reasons). And, you as the client can verify whether it works, and can choose not to store them regardless.


> And, you as the client can verify whether it works

How do you do that? Cookies are typically opaque (encrypted or hashed) bags of bits.


Just the presence or absence of the cookie.


I would assume most websites would still set cookies even if you reject consent, because the consent is only about cookies that aren't technically necessary. Just because the website sets cookies doesn't tell you whether it respects your selection. Only if it sets no cookies at all can you be sure, and I would assume that's a small minority of websites.


The goal with Do Not Track was legal (get governments to recognize it as the user declining consent for tracking and forbidding additional pop-ups) and not technological.

Unfortunately, the legal part of it failed, even in the EU.


Do Not Track had a chance to get into law, which if it did would be good that the code and standard were already in place.


I like the 128 bit strength indicator for how "evil" something is.


Curious, what sites would you recommend?


I use it for the same and usually have to ask it to infer the functionality from the interfaces and class/function descriptions. I then usually have to review the tests for correctness. It's not perfect but it's great for building a 60% outline.

At our company I have to switch between 6 or 7 different languages pretty regularly and I'm always forgetting specifics of how the test frameworks work; having a tool that can translate "intent to test" into the framework methods really has been a boon


This is rude and unhelpful. Instead of bashing on someone you could learn to ask questions and continue the conversation


Those replies are a dime a dozen. Unless they’re poignant, well thought out discussions on specific failures, they’re usually from folks that have an axe to grind against LLMs or are fearful that they will be replaced.


Sarcasm aside, I agree testing on populations raises a whole bunch of ethical and morality concerns.

Also, how would we control for environmental health effects, or even interactions between multiple product variants? This kind of testing would be wildly expensive, pose potential public health risks, and the data collected would be coarse and noisy at best.


This would only be of substances already approved for general use, and wouldn't expose anyone to any substance they wouldn't otherwise be buying/using.

The only difference is it very slightly adjusts the quantity - and does so in a way that is within existing allowed tolerances, so effectively this might already be happening, just we aren't collecting the results.


Wait.... You weren't being sarcastic? This suddenly isn't funny any more.


This comment is disrespectful and dismissive and goes against the community guidelines. Please refrain from personal attacks on other community members


For simple audio devices, maybe with a hardware revision, but it's unlikely the driver circuitry would be routed back to an analog sensing pin unless you were doing some closed loop feedback stuff


IIRC some Realtek cards did have the hardware to route it that way.


I mean you can literally plug a regular pair of headphones into a microphone port instead of a speaker port, then yell into them, and it'll record your voice.

But yes the Google buds are Bluetooth and use separate microphones to send recorded audio back to the device for voice calls, etc.


Why though? You can't route MAC because... ? Because ipv4 provides a higher entropy address? Because MAC is self-assigned and reduplication would require a higher level system? or just because we just don't use MAC addresses that way?

I'm certain there are reasons IP came to live alongside/on top of MAC, but saying you can't do multi-hop routing with it just isn't true. If all the technologies of the Internet were reset tomorrow, how might you design the perfect layer 2 addressing and routing system?


MACs are random. Given a MAC and a connection to a LAN, you can easily answer the question, "is there a station with that MAC here?". If it's not here, and you have a single gateway to another network, you can figure out that to talk to that MAC, you need to go over the gateway. And then things eventually go funny. We hit a network that talks to four others. It has no idea where to send the packet destined for that MAC. It could send it to all four (flooding). Then when a reply comes from one of them, remember that destination for next time. Remember for how long? Sending a packet to every destination will cause an exponential explosion of that packet throughout the network.

It works on small scales. We can stitch together a few LANs with ethernet switches. The switches initially forward everything to all ports, but learn where the MACs are so as to send frames only to ports where the destination MAC is known to be.

Ethernet switching won't scale to anywhere near the complexity of the Internet.
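The flood-then-learn behavior described above fits in a few lines. A toy Python model (class and MAC strings are made up for illustration):

```python
class LearningSwitch:
    """Toy model of an Ethernet switch: flood unknown destinations,
    learn source locations, then forward known destinations directly."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}  # MAC -> port, learned from observed source addresses

    def forward(self, src_mac, dst_mac, in_port):
        self.table[src_mac] = in_port          # learn where src lives
        if dst_mac in self.table:
            return {self.table[dst_mac]}       # known: send out one port
        return self.ports - {in_port}          # unknown: flood all other ports

sw = LearningSwitch(ports=[1, 2, 3])
print(sw.forward("aa:aa", "bb:bb", in_port=1))  # unknown dst -> floods {2, 3}
sw.forward("bb:bb", "aa:aa", in_port=2)          # switch learns bb:bb is on port 2
print(sw.forward("aa:aa", "bb:bb", in_port=1))  # now forwards only to {2}
```

Note there is no aging/invalidation here, which is exactly the cache problem that keeps this from scaling.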


You can't route MAC because there is no prefix matching - only exact matching. That's exactly why you need to "switch" them... and incidentally this is what your proposal accomplishes – it's equivalent to a fully-switched network. Switches (especially L3 switches) maintain port-MAC association tables to switch packets between ports and they're available off the shelf.


IP addresses have structure because a single ISP buys a contiguous block, like 123.234.*.*. A simple routing table sends that whole block to a single network port.

The table required for the whole Internet is large, but not gigabytes.

You can't route by MAC-address because it's effectively random. You'd have to store the port number for every device separately. This works fine at LAN scale, but not for the whole Internet.
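That prefix-based shortcut is longest-prefix match. A toy sketch using Python's stdlib ipaddress module (the routes and port names are made up):

```python
import ipaddress

# Toy routing table: prefix -> next-hop port.
ROUTES = {
    ipaddress.ip_network("123.234.0.0/16"): "port-A",    # an ISP's whole block
    ipaddress.ip_network("123.234.17.0/24"): "port-B",   # a more specific customer route
    ipaddress.ip_network("0.0.0.0/0"): "port-default",   # default route
}

def lookup(addr: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    ip = ipaddress.ip_address(addr)
    best = max((net for net in ROUTES if ip in net), key=lambda n: n.prefixlen)
    return ROUTES[best]

print(lookup("123.234.17.5"))  # port-B
print(lookup("123.234.99.1"))  # port-A
print(lookup("8.8.8.8"))       # port-default
```

Because random MACs have no prefix structure, no equivalent of this table compression exists for them; every address would need its own entry.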


MAC addresses being random is a historical accident (because of hardware limitations). today we can define them in software. and just like we have link-local addresses we could self-assign link-local MAC addresses.

and i think the self assigning protocol in link-local could even go a step further. instead of hard coding a subnet, it could detect the subnet by copying the one from its nearest neighbor. so start with a random address, talk to neighbor to learn the subnet (and netmask) in use and switch to a new address within that subnet. then possibly run DHCP and update the address again. for static addresses DHCP could identify hosts by its cryptographic host key (like the one for SSH)

when two subnets join one of them may have to adjust its prefix. more complex, but still possible.

subnet prefixes could still be assigned to organizations to avoid overlap on a global level.

i am sure i am missing some details but i think in general this could work.


This sounds suspiciously close to re-inventing ARP and IP.


well, it's merging MAC and IP into one address. there is no need for two if the MAC address can be assigned dynamically. and it's extending the auto-discovery of the address to work over larger networks. so it's not reinventing but simplifying things. (or not, i am not familiar enough with the details to be aware of other problems that could complicate things again)


>You can't route by MAC-address because it's effectively random. You'd have to store the port number for every device separately. This works fine at LAN scale, but not for the whole Internet.

Not that I see any advantages to the approach but it's almost workable(?), if a little silly, at internet scale:

If every device had a 64-byte ID, guesstimating 10 billion people * 100 devices/head gets us a 'measly' 64TB of storage. Double that to include routing info gets us to ~128TB. A bit much to be practical, but not entirely insane either.
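Checking that back-of-envelope math (using decimal terabytes):

```python
people = 10_000_000_000      # 10 billion
devices_per_person = 100
id_bytes = 64                # 64-byte ID per device

table_bytes = people * devices_per_person * id_bytes
print(table_bytes / 1e12)    # 64.0 TB for the IDs alone; ~128 TB with routing info
```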


Nice maths. But would each router then hold 64TB? Doing a lookup per request in that volume of data would be slow.

Question: how does DNS lookup differ from MAC lookup? Why is domain name lookup feasible, but not MAC?


the router needs to remember where each address goes. with MAC addresses being random, there is no shortcut. DNS is distributed and you look it up one subdomain level at a time, and that can be cached. same for IP, the router only needs to store the subnet for each destination, not all ip addresses.

a central lookup database for mac addresses (which could be distributed by having separate servers for a segment of the address space) doesn't make much sense because the distance of a server to the location of the device is too great and would make updates expensive.

so the router has to remember each address used. but at least it would not have to store all addresses in existence. actually, i think the storage needs are similar to those for NAT. well, except backbone routers which have to store a lot more.

the actual problem is the initial discovery of a MAC address. where does the routing information for a MAC address come from?

you need some peer finding protocols like DHT, and those are slower.
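The one-level-at-a-time delegation is what makes DNS lookups feasible and cacheable. A toy model of iterative resolution (all zone data and server names are made up):

```python
# Toy delegation data: each "server" only knows its own level.
ZONES = {
    ".":              {"com.": "com-server"},
    "com-server":     {"example.com.": "example-server"},
    "example-server": {"www.example.com.": "93.184.216.34"},
}

def resolve(name: str, server: str = ".") -> str:
    """Walk down one delegation at a time, like an iterative DNS lookup.
    Each hop's answer could be cached independently."""
    while True:
        zone = ZONES[server]
        # Find the entry whose suffix matches the queried name.
        match = next(k for k in zone if name.endswith(k))
        answer = zone[match]
        if answer not in ZONES:   # reached the final record
            return answer
        server = answer           # follow the delegation down one level

print(resolve("www.example.com."))  # 93.184.216.34
```

A flat, random MAC space has no suffix (or prefix) structure to delegate on, so nothing like this hierarchy is available; you're left with flooding or a DHT.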


Because aggregation, summarization and continents are a thing. Also... there are things which speak IP and don't use Ethernet for underlying communications, specifically in the network carrier and high performance optical space.


0C:F9:31:D2:DB:51

AB:33:C6:C6:19:74

I used a MAC address generator to get those two, but I think two is enough to make the point. Current reality aside, would you be able to identify those with binary math as being on the same network device, on different network devices, or across the world? MAC addresses on physical NICs are provided by the manufacturer; sure, you can adjust them, but I think that falls outside the good-faith portion of this discussion.

So if you wanted to have those two communicate no matter what, you would have to have a network device state: "I'm network device A, I have this device 0C:F9:31:D2:DB:51" then another state: "I'm network device B, I have this device AB:33:C6:C6:19:74". Then whenever 0C:F9:31:D2:DB:51 wants to talk with AB:33:C6:C6:19:74, its network device will have to just send it to the next upstream network device, or if there are multiple network devices that could be upstream you could send it to them all, which is just not great for security whatsoever, or you now have to do a recursive lookup for whatever n devices might yet be upstream and wait for a response to see if one of those has it. Overall, trying to send Ethernet frames globally without an IP network sounds like not a great idea.


So it seems like the primary use of IP, as you describe, is to define a way to narrow the search to sub address groups so as to not require enumerating every address in the scheme.

Still, there doesn't seem to be any reason you couldn't just say "device 1 gets MAC 00:00:00:00:00:01" and "device 2 gets 00:00:00:00:00:02" and the gateway controller gets :::00 and there's a special address on :::FF that can be used to talk to everyone...

Is that it? Is that all there is to IP? A loose pattern for reducing search scope, a couple reserved addresses for special cases, and a balance between address bitsize and total number of unique addresses (without requiring additional routing complexity)?

It all seems so... simple


You could. Assuming all your equipment supports setting the MAC, and you make sure to operate on prefixes so you can route by prefix. There's nothing stopping you from doing so.

The reason we don't is because at the time IP was introduced, there were many alternative physical layers in active use. And while Ethernet is near ubiquitous now, what we learnt from that was that it is unreasonable to assume that all your data will go over the same physical layer. And so you need a standard addressing format that will work elsewhere too.

Nothing stops you from stripping it back locally and using MAC addresses for everything internal to you, and ditching IP, and "just" gateway to/from IP. Lots of people did gateway between different protocols before IP became the dominant choice.

But you won't get everyone else to change because it'd require new firewall and new routers, and all kinds of software rewrites, and you can see how long the IPv6 transition has taken, so you'd still need to wrap and unwrap TCP/IP and find a way to address IP for everything that isn't 100% local, and even for lots of local-only stuff unless you want to rewrite everything.

There would be potential ways. E.g. you could certainly use a few bits to say "this is external" and then have some convention to pack an IPv4 address into the MAC or let an IPv6 address overflow into the data, and use that to make gatewaying and routing to external networks easier, while everything else just relies on the MAC. But you'd still need a protocol header for other things too, and then the question is how much benefit you would gain from ditching pretty much just ARP, which isn't exactly complex, a lookup table, and replacing the IPs in the header with just a destination MAC. Because the rest of the complexity is still there.

And you can gain most of the benefit of that by getting an IPv6 EUI64 address [1]. They'll work with "normal" IP equipment, and you can optimize in your own software by having the IP stack ditch ARP lookups when they see a local EUI64 address. Whether that optimisation actually makes a difference is another question.

[1] https://community.cisco.com/t5/networking-knowledge-base/und...
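For reference, the EUI-64 derivation mentioned above is purely mechanical: flip the universal/local bit of the first octet and insert ff:fe between the two halves of the MAC (this is the Modified EUI-64 format from RFC 4291, Appendix A; the function name is made up):

```python
def mac_to_eui64_interface_id(mac: str) -> str:
    """Derive a Modified EUI-64 interface identifier from a 48-bit MAC:
    flip the U/L bit of the first octet, insert 0xFFFE in the middle."""
    b = bytearray(int(octet, 16) for octet in mac.split(":"))
    b[0] ^= 0x02                                # flip the universal/local bit
    eui = b[:3] + bytes([0xFF, 0xFE]) + b[3:]   # wedge ff:fe between the halves
    groups = [f"{(eui[i] << 8) | eui[i + 1]:04x}" for i in range(0, 8, 2)]
    return ":".join(groups)

print(mac_to_eui64_interface_id("00:1a:2b:3c:4d:5e"))  # 021a:2bff:fe3c:4d5e
```

The result forms the low 64 bits of the IPv6 address, which is exactly why a stack could skip neighbor discovery for local EUI-64 peers: the L2 address is recoverable from the L3 address.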


It starts out simple :-).

Then you realize doing some action ends up being O(n^2) so you add some workaround in your switch and cache some things. And you know what they say about cache invalidation. And vendor A implemented it wrong in 1993 so you have a special case for their systems. And then you want to handle abuse cases. And authentication. And you're competing against the whole rest of the world and your thing isn't enough better.


Then how do you send traffic to device1 on another network? You need globally unique addresses and hierarchy. Go back to the drawing board and come back when you’ve ended up inventing a worse IP protocol.

> It all seems so... simple

Because you haven’t even thought through basic use cases.


You would need to structure mac addresses in such a way that they can be easily grouped for routing a-la IP subnets.

It just isn’t suitable for this.


MAC is just one way to identify ("address") directly connected/visible nodes on a network. Not all L2 technologies use MAC addresses.

- "Directly connected/visible" means node X can contact node Y simply by throwing something on the medium (wire, radio, etc.) and doesn't have to knowingly send to a middleman (router).

When Ethernet was invented in the early 80's there were a lot more L2 technologies. Most are uncommon now (Frame Relay DLCIs I think fall in this category, and PPP/dialup was common at one time - no MACs there) except for one: I don't think the cellular network uses MAC addresses at all. I could be wrong with newer 4G/5G stuff which overlaps with Wi-Fi in various places.


> I'm certain there are reasons IP came to live alongside/on top of MAC

There were different teams/universities working on what today we would call LAN and WAN. I forget the details and history (I'm sure someone here, who was involved, could chime in, hah) and might have this wrong, but the result is LAN networking is MAC based while WAN networking is IP based.

It's one of those accidents of history: things are just the way they are, and many don't question it. I run into it a lot when describing basic networking concepts or early Cisco material, when people ask _why_ both MACs and IP addresses exist, and it's just... not always the right time to explain those details to them.


How do we fix it?

How do you offer proprietary stateful services or applications without limiting the storage and management of personal data to a single machine?

I love being able to pick up my phone with the same browser tabs I was just looking at on my computer. I love being able to order lunch with the credit card I added to my virtual wallet on my cell phone. I also understand that developing features requires real-world input data.

This is a genuine question: what might the data structures, storage systems, and user experience look like in a modern company that lets users own their own data?


It's not a perfect solution, but I believe poisoning the data whenever possible is a short-term solution. Aggressive blocking in the browser and using AdGuard/Pi-hole helps. But constantly feeding garbage into your profile whenever possible helps obfuscate things when you inevitably slip up.

I've read people on here argue against using such extensions. There's the initial argument that it doesn't work, but the Google team banned it from the Chrome store, so it must have had some effect.

Then there is the argument that it helps fingerprint your browser as a unique user, which is actually only possible anymore in Chrome, specifically not Firefox. If you're using Chrome already, it seems like a safe bet that every single website you visit is already being sent to Google anyway, so what does it matter?
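For context on that fingerprinting argument: a fingerprint is typically just a hash over many weakly identifying attributes, so an unusual extension or font makes the combination more distinctive even though each attribute alone is common. A hypothetical sketch (the attribute names are illustrative, not any real tracker's payload):

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Combine browser attributes into one stable identifier."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

common = {
    "user_agent": "Mozilla/5.0 ...",
    "screen": "1920x1080",
    "timezone": "UTC-5",
    "fonts": "Arial,Helvetica",
}
# One unusual font (or a detectable extension) shifts the hash,
# making this browser *more* distinctive, not less.
rare = dict(common, fonts="Arial,Helvetica,ObscureCorpFont")

print(fingerprint(common) != fingerprint(rare))  # True
```

This is why "does the extension make me more fingerprintable?" is a coherent worry: any detectable change feeds the hash.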


I think that's absolutely a valid short-term option, but ultimately it legitimizes this cat-and-mouse game of companies mining citizens for personal information. We shouldn't have to feel cornered and preyed upon.


I use this to sync a lot of things, even to my cell phone. It could be improved, though: I want my Logseq directory on my phone, but my source code folders live in the "same" folder and get synced along with it. I'd love a filtering feature on the phone, and generally a better mobile interface.

https://syncthing.net/


That sounds like an incredibly minor convenience at the expense of a lot of privacy. I'm not saying you're not entitled to your vote for the future, but the shitville you're signing up for, for the sake of not having to reopen a tab, is sad.

I don't believe you want to fix it, which is why you'll always fixate on why it's hard; you like how easy shit is.


That seems awfully harsh, and I'm not sure why you're being so cynical. I am interested in working towards a better future, but no matter how dumb you believe I am, comments like yours definitely won't lead there either. In an effort to continue the conversation constructively:

I understand the power of connected systems because I've worked with distributed computing systems for the better part of a decade. In my field, the more servers the better, and the more situations they can compute in (my pocket, a volcano, space, etc.), the better. I like my computers connected, but I also like them under my control.

There's a reality to swallow: my grandmother doesn't want to configure a server, or understand what a certificate or even a YubiKey is. A truly universal privacy and security management system has to do better to make privacy accessible.

Context is important; my health clinic knowing my cholesterol level: important. My credit card company knowing my cholesterol level? Unnecessary. It's going to be important to categorize personal information and provide controls on access.

What if my government adds a new type of issued ID? How does a company efficiently request access to my "swolshon_id" and provide a rationale for its use?

Is a company allowed to refuse service if I choose not to provide a portion of my user data? Alternatively, could companies be required to provide services that operate with limited access?
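One way to picture that kind of per-category, per-party control is a user-held vault that only releases a data category to parties holding an explicit grant. A hypothetical sketch; the class, record values, and party names are all illustrative:

```python
class DataVault:
    """User-held store that releases data per category, per party."""

    def __init__(self, records):
        self.records = records  # category -> value
        self.grants = set()     # (party, category) pairs the user approved

    def grant(self, party, category):
        self.grants.add((party, category))

    def read(self, party, category):
        if (party, category) not in self.grants:
            raise PermissionError(f"{party} has no grant for {category}")
        return self.records[category]

vault = DataVault({"cholesterol": 180, "credit_card": "4242..."})
vault.grant("health_clinic", "cholesterol")

print(vault.read("health_clinic", "cholesterol"))  # 180
try:
    vault.read("credit_card_company", "cholesterol")
except PermissionError:
    print("denied")  # the clinic's grant doesn't extend to anyone else
```

The hard parts the questions above raise (standardized categories, auditable rationales, revocation) all live in how `grant` gets negotiated, which this sketch deliberately leaves out.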


#1: Post-Snowden, the general public has demonstrated they don't really care about privacy. As long as that is true, both companies and governments can demand a lot and get it, even though they don't need it.

Strictly from a computing standpoint (I can't address healthcare providers etc.), the root of many of these problems are at the consumer OS level and the incentives for the companies which build them.

We have three big problems: Google, Apple, and Microsoft. The incentives for these three are misaligned with secure multi-device computing.

Amazon is as bad or worse, just look at the issues they've had with employees accessing Alexa audio recordings along with their security camera stuff. Fortunately their phone flopped.

Out of the big three, Google has been the worst offender here for the last 10 or so years. Apple has been pretty good, especially at actually securing the hardware and stomping out 0-days, but watch out: advertising is their growth business. Microsoft has a long history that isn't trustworthy.

Post-GPT-3.5, privacy matters a whole lot. The difference between people who get completely p0wned and those who don't will be how much public and accessible data is out there. This will create a perverse feedback loop of companies demanding even more personal data and proprietary verification hardware.


I agree; it seems like there should be a traditional program on top that filters responses for known company secrets, conversations that go against published company guidelines, etc.
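Such a filter could be as simple as deterministic pattern matching applied to model output before it reaches the user. A hypothetical sketch; the blocklisted codename and patterns are made up:

```python
import re

# Patterns a company might screen for in model output (illustrative).
SECRET_PATTERNS = [
    re.compile(r"(?i)project\s+nightingale"),  # a codename on a blocklist
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),       # shape of an AWS access key ID
]

def screen(response: str) -> str:
    """Withhold any model response that matches a policy pattern."""
    for pat in SECRET_PATTERNS:
        if pat.search(response):
            return "[response withheld: matched a company policy filter]"
    return response

print(screen("The quarterly numbers look fine."))
print(screen("Details of Project Nightingale are..."))
```

The appeal of a plain program here is that, unlike the model itself, its behavior is auditable and doesn't change with a prompt.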

