What I want for all these services (Little Snitch, ESET, etc) is an EasyList-like ... list. A community-aggregated and reviewed list of servers that don't merit my connection. I'd pay a monthly subscription fee for that.
I'd also like separate lists for
* "this wifi is public, be extra cautious"
* "this wifi is public, be nice and don't torrent, do backups, etc"
* "I'm on a metered connection (e.g. LTE), don't run torrents, backups, etc"
edit: for anyone looking for a monetizable idea: this post has 41, no 42, no 43 points in about an hour. Probably a good idea...
My solution right now (on macOS) is Gas Mask[1] (a menubar hosts file manager) combined with some very nice hosts files[2]. It certainly kills off most of the pop-ups I run into.
This is the way. No need to individually configure all your devices. My DHCP will hand out a local DNS to each client that maps all the ad/malware domains to 0.0.0.0, so if you’re on my network, even as a guest, you get blocking for free.
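For anyone wanting to replicate this, the core of such a setup can be sketched in a couple of dnsmasq directives. The paths and addresses below are illustrative, not the poster's actual config:

```
# /etc/dnsmasq.conf (sketch)

# Hosts-format file mapping ad/malware domains to 0.0.0.0
addn-hosts=/etc/blocklist.hosts

# Tell DHCP clients to use this box (192.168.1.2 here) as their DNS server
dhcp-option=option:dns-server,192.168.1.2
```

Pi-hole bundles essentially this, plus automatic list updates and a web UI.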
I've always been hesitant to use DNS to block ads because it's difficult to turn off for non techies. Did the pi hole cause any issues in your experience?
Vanishingly few. Occasionally, I'm browsing the web and hit a text link that takes me to the browser's "I can't find this site" default screen. This usually happens with sponsored links that are not served from ad networks yet link to known ad sites.
My wife frequently complains about sponsored Google searches not resolving. She doesn't want to use an in-browser adblock, so the links will still appear, but aren't usable. Also, many redirecting analytics services from emails get blocked.
Personally, I don't find these to be breaking issues for my use. My only real complaint is that the PiHole interface's administrative features are authenticated via the service user account's password: the Ubuntu password for whatever user the service runs under when installed on Raspbian. There's no secondary credential store, and not even a list of users; to log in, you just enter that user's password. If there were a way to assign credentials to network users, let them whitelist/blacklist entries, and audit that activity, it could easily be much friendlier to non-technical users.
One final half-complaint: if a link points directly to a blocked site served over SSL, you won't get the nice "This site has been blocked" page. It will just show the standard Chrome/Safari/Firefox "could not connect" error. As a technical user, I find this normal and sensible. For others, it makes "the internet" appear "broken". Obviously this isn't something a PiHole can fix on its own, and I don't expect it to. Adding a trusted root or intermediate cert to every one of my network devices, so that a random box on my network can dynamically "poison" my DNS and serve fraudulently generated site certificates just to show me an informational page, amounts to letting that box proxy and DPI my SSL traffic. It's not something I'm comfortable maintaining.
All good points. I avoid some of the headache by not using the actual PiHole software, and therefore not bringing along whatever credential baggage that comes with. Just dnsmasq, and cron to update the blocklist. My setup runs directly on my router as well, eliminating the need to maintain another box.
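A cron-driven refresh like that can be a short shell script. This is a sketch, not the poster's actual setup: the URL, output path, and restart command are all placeholders.

```shell
#!/bin/sh
# Convert a hosts-format blocklist (lines like "0.0.0.0 ads.example.com")
# on stdin into dnsmasq "address=" rules on stdout.
hosts_to_dnsmasq() {
  awk '$1 == "0.0.0.0" { print "address=/" $2 "/0.0.0.0" }'
}

# Cron job sketch (URL, path, and restart command are hypothetical):
#   curl -fsS https://example.com/hosts.txt | hosts_to_dnsmasq \
#     > /etc/dnsmasq.d/blocklist.conf && service dnsmasq restart
```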
Are you using a Ubiquiti router for this by any chance? Would love to hear more details as I have been thinking about implementing something like this on an EdgeRouter Lite.
I run pihole in a docker container on linux, so the password thing isn't a problem.
Also, for people redirecting ad servers to 0.0.0.0: that can break pages, particularly with things like iframes. Pi-hole instead redirects them to its own webserver and serves up 1x1 transparent pixel images to avoid this.
Plus, you get free ad blocking for most of the native apps on your mobile devices when using wifi at home or outdoors with VPN (haven't tested the latter yet).
Why is your pi-hole exposed to the internet? That's not a great idea. You could have other people using your DNS service also.
It's true this is security through obscurity and won't slow down a spear phisher, but I always change the SSH port to something like 22022 when I have to expose it to the internet, and I find this eliminates almost all of the portscanning/doorknob rattling. Same thing with WordPress: changing the /wp-admin directory is immensely helpful.
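For reference, the port change itself is a one-line edit (the path assumes a stock OpenSSH install; restart sshd afterwards, and keep an existing session open in case of lockout):

```
# /etc/ssh/sshd_config
Port 22022
```

Remember to update any firewall rules to allow the new port before restarting.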
Could have been a number of things - most likely culprit I've seen is that it was left exposed to the internet, ssh most likely, and the default password wasn't changed.
With LS afaik it's not possible to run multiple profiles at the same time (say you have groups of rules for a particular app), otherwise it would be perfect.
I would pay for such a service as well. In addition, I would love it if this service allowed companies like Apple and Google to maintain their own lists of IPs and update them regularly, so you can be 100% sure that an IP belongs to them.
Not entirely. From what I understand, these outbound firewalls work at the kernel level and interject themselves into a network connection outside of the DNS lookup process. You could do a reverse DNS lookup on the IP, which Little Snitch tries, but with things like CDNs and AWS EC2 you end up with a lot of reports of applications trying to connect to "foo.akamai.com" or "bar.akamai.com", where foo and bar are entirely separate entities, or simply to ec2-0.1.2.3.aws.amazonaws.com or what-have-you. Little Snitch also appears to maintain its own cache of DNS entries, so if one application connects to some CDN's IP via its own CNAME, other applications connecting to the same IP will often appear to be connecting to the first application's CNAME, because LS has resolved that IP to the first CNAME more times, or first, or something like that.
It's not perfect, and frequently it isn't even helpful.
I know it's not the same thing, but I use TripMode on macOS for your last two points, which selectively blocks app access based on what network you're connected to.
I also love TripMode. It uses a kernel extension, which some users may dislike, but it's so nice to turn off Dropbox/Google Drive/Arq/iCloud backup services while tethering at Panera.
So would you want just the curated list, or an application that uses said list and provides feedback to the user? Either? Both?
Because I've _sort_ of done this on my personal Mac (though for reasons more pertaining to an infrastructure I maintain for a client), scaling this into a service wouldn't be TOO difficult.
I want to drop that list into little snitch. In fact, when I install Little Snitch, I want it to ask me "would you like to subscribe to iamdave's list?"
Like how a number of different ad blockers subscribe to the EasyList ad blocking list, correct? https://easylist.to/
“
The links listed below allow you to select filter lists for use in your browser provided that you are using a compatible ad blocker (tested with Adblock Plus, AdBlock and uBlock Origin). Furthermore, EasyPrivacy Tracking Protection List is available for Internet Explorer 9 and higher.
”
There used to be an application used by torrenters called PeerGuardian, and later PeerBlock, which was basically just that: a curated list of IPs known to belong to universities, companies, governments, etc. It probably had malicious domains in there as well.
An MVP for this would be quite simple to build. Do you happen to use Little Snitch? AFAIK, they don't really have an import/export option (their "backup" option is tied to the user account), but I've found a way around this limitation that is only a tad more tedious than an automated workflow.
or maybe the old skool option of paid upgrades? $10 first time fee to get the current list. then, as you start to notice things not getting blocked, purchase the upgrade/updated list?
I guess it depends on the monthly fee? $10/month, nope. $12/year-$1/monthly or $10/year-1payment to get monthly updates, quite possibly. i do understand updating/maintaining a fresh list will cost someone somewhere money.
And extend the idea with community-aggregated whitelists. If I want to use software like Spotify, it obviously has to load the songs from their servers somehow, so the network endpoints the application is intended to use should be community-whitelisted.
I do this on my home network. It works well, so I encourage you to build this out.
As an added bonus, as a service you could point the DNS entries to your own web server and serve up cat pictures or motivational pictures in place of ads.
The solutions I use were already "built" before this one existed. I was using djbdns to block ads before there were adblockers.
I think it's great that more users, through DNS-based ad blocking projects, may see how controlling their own DNS is useful, perhaps in ways they might not have imagined.
However, the last time I looked at it, I recall this project was defaulting to using open resolvers run by third parties, e.g. Google. Maybe I am remembering incorrectly, since so many projects like to use these third-party resolvers.
In any event, that is not how my solutions work. A third party with such delegated (ultimate) authority from the user is not part of the solutions I designed for myself.
Also, I never used dnsmasq as part of any solution. I have a strong bias against it for a number of reasons. If I recall correctly, pi-hole relies on dnsmasq.
What if I let the user run it locally? I point a local dnscache at a local, customised "root.zone" that blocks all these EasyList ad server domains? User could have several alternate root.zones that provide different "profiles". To switch profiles simply switch root.zones.
(I used to do this for myself. Then I stopped using caches altogether. Now I do everything with tinydns, cdb and a customized stub resolver.)
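A profile switch like the one described could be as simple as swapping a symlink and restarting the cache. This is a hypothetical sketch, since the poster's actual paths and service layout aren't shown; it assumes a daemontools-managed dnscache:

```
#!/bin/sh
# switch-profile <name>: point the cache at an alternate customised root.zone
ln -sf "/etc/dnscache/root.zone.$1" /etc/dnscache/root.zone
svc -t /service/dnscache   # daemontools: send TERM so dnscache restarts
```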
Or what if I resolve all the ad server domains in the EasyList each day from various checkpoints around the world and publish an IP blocklist? Then users can import it into their application level firewalls.
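The resolver half of that idea can be sketched in a few lines of Python. The domain list here is a placeholder; a real version would parse EasyList and run from multiple vantage points to catch geo-targeted DNS answers:

```python
import socket

def resolve_blocklist(domains):
    """Resolve each domain to its current IPv4 addresses; skip failures."""
    ips = set()
    for domain in domains:
        try:
            for info in socket.getaddrinfo(domain, None, socket.AF_INET):
                ips.add(info[4][0])
        except socket.gaierror:
            # Dead or parked ad domains simply drop out of the list.
            pass
    return sorted(ips)
```

Publishing is then just writing one IP per line in whatever format the target firewall imports.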
Even without using authoritative DNS, if we only have a blocklist of IP addresses and some application-level firewall solution, we can examine outgoing HTTP headers in a client-side proxy and filter accordingly.
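A minimal sketch of that header check, assuming a plain-HTTP request passing through a local proxy (the blocklist entry is made up; TLS traffic would need SNI inspection instead, since the headers are encrypted):

```python
BLOCKED_HOSTS = {"ads.example.com"}  # hypothetical imported blocklist

def should_block(raw_request: bytes) -> bool:
    """Return True if the outgoing HTTP request's Host header is blocklisted."""
    for line in raw_request.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            host = line.split(b":", 1)[1].strip().decode("ascii", "replace")
            return host.split(":")[0] in BLOCKED_HOSTS  # strip any :port suffix
    return False
```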
I also do not use a popular browser that runs Javascript to send and retrieve to and from the internet. That is the root cause of most users' problems, and avoiding it is the most effective solution, bar none. The third parties users want to avoid almost always depend on Javascript to accomplish their goals.
Connecting a powerful interpreter, with potentially full control over the user's computer, to the open internet, and then believing this can be safe.
The user is granting use of this interpreter to third parties. In this thread we can see how users struggle to know which third parties can be trusted. All for the sake of keeping that interpreter open to "good" third parties to access at will over the internet. (Why is a good question.)
Early web browsers called on other, separate programs to do specific jobs outside of rendering HTML. Taking a cue from that history, I use simpler, limited programs with no built-in interpreter to do two specific jobs: sending and retrieving.
Third parties can return code in response to requests for content, but I am under no obligation to run the code, let alone run it from a popular browser with a powerful interpreter that is connected to the internet.
Cannot speak for others, but this approach has worked well for me as the www worsens.
But yeah, current situation is getting ridiculous. One could always expect malicious actors, but I didn't guess people will be routinely putting in the cloud things that have no business being Internet-dependent. Then again, I remember first reading pg's essays, in which he praised the benefits of running things on your server and delivering them as a webpage, instead of standalone software. I nodded along, and it only occurred to me many years later how incredibly user-hostile that is, too.
Give it another 25 years, and you will have to pay a premium for things which are stand-alone, disconnected from the net. Want a car which is not navigating using cloud AI? Only the rich can afford that...
I really have no problem with sharing my information openly in cases my presence has an obvious effect on those around me. Letting other cars within a certain radius know where I am seems a reasonable sharing of my information. I would go as far as sharing my intended destination so that some "cloud" some where can better plot my route to minimize traffic for me and others. That said, storing all that information for later analysis, which is possible given what I've shared, would be an unreasonable use of my data (unless it's done as part of some aggregate information). The problem of course is that once I share that information I lose control of how it is used later. If someone can devise a way for me to share my information while controlling how that information can be used later, it would go a long way to striking the right balance. I guess that's more or less "personal DRM" for our information.
I think the difference is consensual sharing of the information. Many would probably be OK with certain types of information collection, but the way some things are collected without overt notice or consent (Example: Wi-Fi SSID to GPS long/lat pairings on phones) is understandably concerning to some.
> If someone can devise a way for me to share my information while controlling how that information can be used later, it would go a long way to striking the right balance. I guess that's more or less "personal DRM" for our information.
It would not be easy to do this in a technical manner, as it could be defeated (as with most DRM). It sounds like a legislative solution would be best.
> Want a car which is not navigating using cloud AI? Only the rich can afford that...
Good. I sure hope only a small fraction of the population will be able to manually drive their car in the future. It would save lives, time, and money for everyone if the bulk of the idiots were unable to manually drive their car.
> It would save lives, time, and money for everyone
I'll give you "lives" and "money," but not necessarily time.
I live in a place where self-driving vehicles can be spotted fairly regularly. Once a week or so. You can tell by the special license plates. They are always very ponderous, careful drivers. It's fascinating to see them gently slow to a stop for a red light, then take off like a jackrabbit when it turns green.
Perhaps as the technology matures, they'll start to keep pace with traffic better.
I'm actually looking forward to self-driving cars. It's all the personal time benefits of mass transit (book reading, meditating, general mental health), without worrying about accidentally sitting in someone else's pee.
I am pretty sure that is because they have to account for all the non-self-driving cars. If all cars were self driving, they could co-ordinate and go a lot faster.
I think it will. The above poster's reference to how 'jaywalking' became a crime after motor vehicles associations conducted heavy PR campaigns to banish pedestrians from what once were shared streets is instructive.
I interpreted the alternative to that being locally run AI, which is not implicitly spying on you and capable of doing nefarious things (e.g. not allowing you to navigate until you pay this month’s upgrade fee).
I assumed the same, too. A self-driving car should be able to navigate itself without an Internet connection; any permanent ties to the cloud are just anti-consumer business strategy.
Still, hands-free AI navigation is worth not disabling entirely. Paying to keep all your personal driving data, preferences, and in-car conversations from being uploaded to the cloud, though, is another story...
The world is going to split into “cheap AI” that runs in the cloud and is free or heavily subsidized by selling your data and giving you biased functionality toward the AI provider and “expensive AI” that runs completely locally and has no or limited outbound connectivity and is solely biased toward the user desires. Hopefully this comes in open source as it will be hard to write and expensive to run on local hardware.
Our product can run completely locally and do a good job. Allow connectivity to other devices in the building and performance improves. Extend connectivity to our cloud servers and it gets a little better still. You can always revert to lower levels (and an Internet outage would simulate that, for example).
We regard your data as yours unless you want to share it for a specific reason.
me :)
I remember in 2000 when I first installed ZoneAlarm as a trial I noticed that Word wanted to connect to something, and I was thinking "why would Word want to escape my PC?
Later I switched to McAfee Corporate Firewall (in which I could even allow/block IPs/ports).
Now I am proudly using WFC that someone mentioned in another post earlier.
I've been saying for a long time that one thing that companies can do to meaningfully increase their security is to NOT install default routes on most machines.
Put in routes for your local networks and applications, set up a proxy server for any legitimate traffic that needs to "exit" the network (i.e., go to the Internet), and simply drop anything else.
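Concretely, on a Linux host that might look like the following. The interface names and subnets are made up for illustration; the proxy is whatever egress proxy the site runs:

```
# Local networks get explicit routes:
ip route add 10.0.0.0/8 dev eth0
ip route add 192.168.0.0/16 dev eth1

# Deliberately NO "ip route add default via ..." line.
# Traffic to anything else has nowhere to go; legitimate internet access
# happens only through the proxy host, which clients reach via the local routes.
```

Malware that ignores system proxy settings and tries to phone home directly just gets "network unreachable".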
it was thought of 25 years ago , I remember using a tool that monitored all outgoing traffic and I needed to approve any traffic that was not already authorized.
For those on Windows, http://www.sphinx-soft.com/Vista/index.html does the same using the native firewall (so no 3rd party dependencies, services, or bloat) (though they've ~recently added paid licenses with more features to their basic offering).
I only wish it were cleaner and simpler. I don't think the Windows Firewall API is too bad, I should add this to my bucket list of open source software to write that I'll maybe get around to in the next 20 years....
> Simple tool to configure Windows Filtering Platform (WFP)
So much better than anything else I tested. Easy to import/export rules (XML) and there's also portable mode and advanced options (that goes beyond the simple UI you can see in the image).
Thanks for the recommendation. Looks like a really good app. It recommends disabling the Windows firewall, which is understandable. But then you start getting pestered by Windows to turn it back on. Do you turn off the Windows firewall, or have them both turned on?
When the native firewall in Windows blocks something, doesn't the connection attempt fail immediately?
For example, while the Little Snitch popup dialog is waiting for user input, the affected application just sees an unusual latency spike; it will not immediately complain that internet access is not available. Afaik, this is not the case with the Windows Firewall: the connection will fail for the application while the frontend app is still waiting for the user's decision.
I believe by default incoming connections are "on hold" until they either timeout or the dialog is confirmed one way or the other. Note that most "real" firewalls have two different options for blocking: drop or reject. So long as the packet isn't rejected, drop and "drop unless this dialog is satisfied" aren't very different.
I'm not sure how this app gets around it (if it does at all).
It is because one of the major use cases for an outgoing firewall is installing new software, which is exactly when you want to be careful about what the application connects to, and that scenario does not work very well at all compared to Little Snitch.
My wife has an old Fitbit-like app that tries to connect to China every 2 seconds. Papertrail is really useful for seeing patterns. I blocked outbound traffic to every single country except the one I live in (the embattled country of Binomo). Once I allowed the US and EU, it has been interesting to learn when a site is not US or EU.
These are the domains that were blocked this week, leading me to allow a few more countries.
TinyWall... other Little Snitch-like apps on Windows aren't anywhere near as nice as Little Snitch, but TinyWall gets rid of the crappy prompts firing a million times a day until you configure it. (The thing that sucks is that apps like ChromeUpdate and others can get around the firewall somehow, so if you're set to prompt, it will prompt you all day long; TinyWall makes it all sane.)
Unfortunately Tinywall hasn't been updated since 2016. It still works, of course, but I'm not comfortable using security software that isn't in active development.
Thanks for the link. I'm curious how it compares to https://www.binisoft.org/wfc.php although this looks to be free I don't think it's open source. Not having much luck finding license info for it.
I have not used Simplewall but I am a paid customer of Binisoft WFC and do recommend it. WFC works great and is frequently upgraded, the developer is very responsive to his users.
It does have a crude ugly UI, so just don't expect it to look like Little Snitch (which I also endorse on MacOS) or Glasswire.
As you can see in the feature comparison[1], the free edition doesn't cover Windows' system applications, and it doesn't offer many pre-defined rule sets or desktop integration. Binisoft's WFC does.
However, if you're willing to spend money, Sphinx's product seems to offer quite a bit more features, but then you might as well consider other products out there.
Also, Binisoft's WFC UI, especially that of the rules editor, slowed down pretty heavily for me with just a hundred rules.
I'm not sure how Sphinx W10FC handles it, but Binisoft's WFC doesn't accept file types that aren't pre-defined. So if you have a binary with a random suffix, as anti-cheat rootkits often have, you can only add a generic allow or deny rule; you can't configure the rule.
Glasswire 1.x was quite a nifty firewall/network monitoring tool and you could easily see all the dozens of outbound connections Microsoft launches from Windows 10.
Unfortunately, version 2.0 is no longer free. Windows 10 Firewall Control looks pretty good functionality wise, but it would be better if it had Glasswire's interface or one that's even better.
Looks promising. I used to use Little Snitch, but last year they decided to charge for the new version, and I uninstalled it.
Little Snitch was effective, but overly complex for the average user. I'm sure it's great for someone who configures networks on a regular basis, but as a Mac user, I just want to use my Mac. If I wanted to twiddle with security settings all day long, I'd still be on Windows.
This looks like it might be a good, simple, replacement. Hopefully as it evolves it doesn't get swamped by feature bloat.
That comment makes me chuckle. These days, I have close to zero faith in commercial software that is "free", assuming that the business model is selling my data.
I happily paid for Little Snitch and was comforted by the fact that I was the customer.
True. But in practice, a software company that's making money from users has a lot of incentive not to harm their brand, or open themselves up to competition, by being shady.
It does happen but users are waking up to the problem and companies are learning.
Outcomes from such clauses also hinge on whether metadata describing the operation of their system - such as logs showing who communicated with who - is your data, or their data.
Following recent changes to law here in Australia, for example, metadata is essentially the property of the state.
The new major version offers a lot more functionality. I looked into it and decided I wanted it, so I upgraded. I assume that I could have stayed with the old major version but I'm not sure.
Indeed. I'm especially suspicious of "security" software. Releasing free, high-quality security software seems to be a popular attack vector for advertisers, spammers, hackers, and nation states.
Now we just need to figure out who pays for those upgrades. I think we could try a scheme where we keep growing the user base so more recent customers pay for the prior customers' upgrades.
Considering Apple's constant shifting of goalposts with macOS, what counts as "significant" in your book, even disregarding user-facing features? And how significant is the ~$50 they want for it?
I have to roll my eyes at someone who scoffs at a $25 upgrade every few years.
If they're not cool with funding any future development on the product, then they must be cool with not upgrading. But of course they instead entitle themselves to all of your future labor because they once threw some shekels your way.
you only need to pay every 3-5 years aaaaand only if you want to upgrade, aaaaaaand you can keep using your last updated version, aaaand only 50% of the full price
Unfortunately, this still has the key flaw that has plagued outbound firewalls since their invention:
"Currently, LuLu only supports rules at the 'process level', meaning a process (or application) is either allowed to connect to the network or not. As is the case with other firewalls, this also means that if a legitimate (allowed) process is abused by malicious code to perform network actions, this will be allowed."
In other words, it won't stop malicious Javascript running in your browser from making an outbound connection, which is the most common way for malware to do that.
It does say "currently", but I'm not sure how you would get around this flaw; at any rate, nobody has yet figured out how.
Combining process (source) and destination rule combos, Little Snitch could be customized to "solve" this issue. Process A is allowed to talk to domains X, Y and Z.
"Solve" not solve because, for me, setting up baseline rule sets was too intrusive to my workflow.
> Combining process (source) and destination rule combos, Little Snitch could be customized to "solve" this issue. Process A is allowed to talk to domains X, Y and Z.
Ok, and what happens when I want to browse to a different site?
>for me, setting up baseline rule sets was too intrusive to my workflow
It seems like that would be true for anyone that wants to use their browser to go to more than a small number of websites.
> In other words, it won't stop malicious Javascript running in your browser from making an outbound connection, which is the most common way for malware to do that.
This might be possible, if you start off with deny-all as the default and then start manually adding exceptions as you browse.
I would like to see internet access treated as an OS permission that need to be expressly granted by the user, same goes for iOS and Android. I wish this was part of the OS and not something I need to go and install 3rd party apps for. I like the idea of deny all by default.
> I would like to see internet access treated as an OS permission.
That would be nice, but it wouldn't fix the problem I've been talking about, because you would have to give your browser the internet access permission, and the OS has no way of knowing which of the connections your browser is making are legitimate and which are not. Only you know that, which means you would have to continually be interrupting your browsing to approve or disapprove connections.
Let me ask, seriously: if we take the Great Firewall of China, it does all sorts of packet inspection. Why can't this be applied to personal firewalls and inspect the traffic leaving (or coming in) for malicious content being masked as allowed traffic, etc?
There was a company called Packeteer that did traffic shaping/inspection....could any concepts be applied to firewalling as they were to traffic prioritization?
And how does one verify that the new exception request is trustworthy? The whole cat-and-mouse game of trust/deny is enough to drive one mad. The only winning move is not to play.
And of course, anything local to the machine can only be trusted as long as you are willing to accept that the kernel is not compromised, because it's pretty trivial for a rootkit running in the kernel's context to conceal files and sockets, or even create unreported network interfaces. I remember that Greg Hoglund's rootkit.com contained several early crude (and not so crude) implementations that could do these kinds of things (FU springs to mind?) way back in the mid-2000s or thereabouts.
The answer to that, of course, is that if you are really serious about firewalling, said firewall must be a separate device.
Also, there's nothing preventing a program from debugging a trusted process and getting that process to perform the requests for the malicious program.
1. The OSI tried to get a trademark for “open source” in their early days and failed.[1] They don’t own the term, and arguing fine distinctions like this does nothing but promote flame wars.[2]
2. The developer put a lot of effort into this and was generous enough to make this available for free with the source code open. Please be gracious, because belligerent feedback like this is what causes people to sometimes reconsider making software free or open source.[3]
3. You also falsely claim Patrick Wardle is aware of the issue and refuses to change it, even though he hasn’t commented on the issue you cited, at least as I write this.
1. Whether or not the OSI has a trademark doesn't seem relevant to me, they coined a term which wasn't used before and associated it with a well known definition. The distinction seems more significant than fine to me and causes confusion about what kind of license I (and others, as evidenced by the open issue on the subject) expect the software to be under. That a subject is source of disagreement is certainly not a valid reason not to discuss it.
2. I'm very aware of the efforts of free and open source software authors, and I'm grateful for them. In fact, I occasionally take time to thank them and make donations (although I should do it more). This doesn't mean that inaccurate statements should not be corrected, and I don't see anything "belligerent" about reporting them, any more than about reporting a bug.
3. You're right on this, I wrongfully assumed one of the person answering in the issue was the author. I changed my comment.
People keep claiming that this is somehow an established definition of "open source" (or Open Source or OpEnSoUrCe), but when I look at the people around me, it seems a vast (silent) majority actually uses the term literally. "Open source", in my personal "tests", means "the source is open." You may balk at this, but notice the elegance of the definition corresponding to the meaning of the actual words. "Free Software" is more commonly understood to mean what people sometimes religiously defend as the meaning of "open source".
Then someone comes along wagging his finger online saying no, no, no, that’s not what that word means. Just like with “could care less”. You know what? I think the battle is lost. Language doesn’t work that way, and I’m going to call it: Open Source means open source, i.e. source is open. And we would do ourselves a great favour switching to a different term, because we’re swimming upstream in this one.
While we’re on the subject of poor names: the worst mistake Stallman made was calling his movement Free Software. He would have turned a strong undercurrent of PR in his favour [e: by calling it Freedom Software] instead of having to constantly battle the semantics of “Free as in Freedom of Speech, not Free as in Free Beer.”
When people say naming things is one of the two hardest things in programming, I’m wondering if it’s just because programmers are really bad at it.
You can certainly change the meaning of well understood words in English by capitalizing them: polish != Polish, windows != Windows, etc. Furthermore, adding a hyphen to convert a noun phrase to an adjective seems to me to be a common and non-controversial practice in English.
(Yes, I believe I understand your position, but I think you're going to have to rephrase your objection.)
Not right. The problem with this definition is that it doesn't confer any extra rights. It doesn't mean very much if you can read the source code if you're not allowed to use it, study it, modify it or release modified copies. Microsoft does this, actually. They release the source code of the C runtime library, but it's All Rights Reserved, so you can't really do anything with it except use it for debugging. You have no more rights to it than if they didn't release it and you reverse-engineered it from the CRT binaries instead. Even your right to study it is in question. You can't contribute to the Wine CRT if you've seen the official CRT source.[1]
So, for a program to be "open source," under the commonly understood definition, it must confer some rights. Most organizations that deal in free and open source software, like the OSI, FSF, and Debian, have agreed that this includes the right to use the program for any purpose, including commercial purposes.
Non-commercial use clauses for software are really troublesome, too. For example, if a small family-owned business uses the same computer for personal and business work, are they allowed to use LuLu at all? If another Objective-C developer is reading LuLu's source code and they come across a utility or widget or something that they want to use in their own software, can they use it without the troublesome non-commercial use restriction coming with it? (Probably not.)
No, I'd say that your "open source" is either "shared source" or "source available" - depending on exact terms - using terms that we've been using for a couple of decades now.
I'm not personally a mac user, but I'm still very glad to see projects like this being developed as open source. Very cool I hope this goes on to be a really solid piece of software.
Does anybody have any recommendations for good ways to get fine-tuned control of Windows' default firewall?
The install page says that `sudo configure.sh -install` is the install command. The command is actually `sudo ./configure.sh -install`. Further, it should probably be `sudo ./configure.sh --install` (with two hyphens), as is convention for named (edit: long-form) options on the command line.
> as is convention for named options on the command line.
Gosh, I really wish that people would follow a convention for named options on the command line. I don't even really care which one, as long as they were all consistent in picking one.
The usual convention is a single hyphen for short-form (single-letter) options, and a double hyphen for long-form options:
> python -v
or
> python --version
It’s good practice to offer both. It should also be possible to set multiple options at once by appending one after another in short form following a single hyphen:
> ls -alR
is the same as
> ls -a -l -R
Long-form options are technically a GNU thing [0] and are not mentioned in the POSIX standard, but they’re conventional enough now that I think it’s good practice to include them in any CLI program.
There are also a number of looser conventions about the meaning of certain short-form options [1].
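The conventions described above (single hyphen for combinable short options, double hyphen for long ones, both offered for the same flag) can be sketched with Python's argparse, which follows them by default; the flag names here are just illustrative:

```python
import argparse

# argparse implements the GNU-style conventions: "-v" short form,
# "--verbose" long form, and combinable short flags like "-aR".
parser = argparse.ArgumentParser(prog="demo")
parser.add_argument("-v", "--verbose", action="store_true",
                    help="enable verbose output")
parser.add_argument("-a", "--all", action="store_true",
                    help="include hidden entries")
parser.add_argument("-R", "--recursive", action="store_true",
                    help="recurse into directories")

# Short options can be combined after a single hyphen...
args = parser.parse_args(["-aR"])
assert args.all and args.recursive and not args.verbose

# ...and are equivalent to passing them separately or in long form.
assert parser.parse_args(["-a", "-R"]).all
assert parser.parse_args(["--verbose"]).verbose
```

Offering both forms costs nothing with most argument-parsing libraries, since the short and long spellings are registered in a single declaration.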
It's good to see another option for an outbound firewall, but as an industry we still have a long way to go. As with many security solutions, there is a conflict between flexibility and usability. I want:
1) To be able to choose the exact host/subnet/domain that an application can access with a good UX
2) Have someone else curate a list that I subscribe to that handles most cases
3) Work on desktop and mobile
For choosing the exact host/subnet/domain on a per-application basis, the best UX I've seen on any platform is FirewallIP[1], unmaintained software for jailbroken iPhones. So many desktop solutions[2] only let you choose Allow everything or Deny everything; Little Snitch and Windows 10 Firewall Control[3] are exceptions, but even they are limited.
The curated list option should be easy enough to support on most platforms. Easylist has shown how well it can work on the browser when combined with uBlock Origin. Install it for someone who is technically naive and they'll just see no ads with no negative experience.
The mobile platform is harder to support. Under Android, you either need to root the phone to get access to the underlying iptables firewall with something like AFWall+, or you run a fake VPN back to the device and filter there, which is prone to failure (is it working? has it stopped itself for some reason?) and has less flexibility. Under non-jailbroken iOS, products like Surge, Potatso 2 and Shadowrocket run a local proxy that is similar to the fake VPN under Android, but they require manually editing a text file for configuration and seem to be designed to get around the Chinese internet restrictions rather than to protect privacy.
Breaks networking on High Sierra.
No browser works anymore.
curl stops working.
git doesn't even trigger its asking window.
Power usage doubles when networking is used too.
> We recommend against using Creative Commons licenses for software. Instead, we strongly encourage you to use one of the very good software licenses which are already available. We recommend considering licenses made available by the Free Software Foundation or listed as “open source” by the Open Source Initiative.
That's because we treat software very differently from most other content subject to copyright.
As in this case (reading the above threads), there's confusion as to whether the no-commercial-use clause extends to the content or to the outcome of its processes. That is to say, NoCommercialUse for a book clearly means for derivative works. Nobody would ever suggest you can't read a book while in a commercial establishment. But in software we routinely place use restrictions on the end-user. Kind of bizarre, when you think about it.
I completely agree with your first sentence. But I think your interpretation of NonCommercial is a bit off. NonCommercial in the context of a book does not refer to "using" the book or to creating derivatives. You don't need a license to read a book. Rather, it refers to copying the book. They have a separate clause that refers to creating derivative works from the book. If you have a CC-BY-NC book, that means you're allowed to copy the book as much as you want as long as it's not for commercial purposes. If you have a CC-BY book, that means you can copy it as much as you want, even if it's for commercial purposes. If you have CC-BY-ND, that means even though you can copy the book as much as you want, even for commercial purposes, the author is not granting you the right to make derivatives.
Software is different because copying software is a necessary part of using it. So CC-BY-NC for software could quite reasonably be read to restrict its use in a commercial environment because you (notionally) need a license to make that copy from the internet to your hard drive, and from your hard drive to system RAM so that you can use it.
You're distinguishing more finely than I am between exact copies and modified copies. Fair enough. My use of "derivative" above is intended to encompass deriving copies from an original, with or without modification.
To the extent using software inherently means creating copies - so does reading. The image of the page is transferred to my retinas and encoded in the volatile storage of an organic neural network.
(I'm making the same distinction between exact and modified copies that the Creative Commons folks make...)
As to your second point... Ha! Fair enough. But IIRC case law has actually recognized that the copies created on a computer as you install and execute a program count as "copies" for the purpose of needing a license for an activity that would otherwise violate copyright. That is why EULAs are, to some extent, considered valid and enforceable. No such case has been made for your retinas encoding the light bouncing off a page and transferring that pattern to your neurons.
What's the CPU usage? I tried Little Snitch, but it was often consuming insane amounts of CPU (40%+), which matters a lot on a 12" MacBook on battery, so I uninstalled it.
Yeah, I thought so too. I even tried a complete reinstall, but that didn't improve the situation.
It's probably due to the absurd amounts of logging it does (every single connection tracked on a world map), which I didn't find a way to disable... I probably have an abnormal number of connections too due to torrenting (only Linux distros obviously). The Macbook CPU isn't high performance either.
The author is not subtle in letting know that this is intended to be open source replacement for Little Snitch (domain!).
But at least macOS has Little Snitch; the closest for Linux was OpenSnitch, which was announced on HN a few months back:
https://github.com/evilsocket/opensnitch/ but I'm not sure whether it's still actively being developed.
No, sadly OpenSnitch is dead.
Evilsocket for whatever reason went back to OSX and I (who was the other large contributor) did not feel the motivation to work on the project anymore.
Second, is the business model of Objective-See to offer open source alternatives for Objective Development's products (LuLu instead of Little Snitch; OverSight instead of Micro Snitch)?
which hung and didn't show any sort of prompt from LuLu. I imagine (though I haven't used it) that Little Snitch would prompt to allow/deny git from connecting to the network.
So, it doesn't seem very functional, but it is an alpha, so that is likely expected.
> Do I need LuLu if I've turned on the built-in macOS firewall?
> Yes! Apple's built-in firewall only blocks incoming connections. LuLu is designed to detect and block outgoing connections, such as those generated by malware when the malware attempts to connect to its command & control server for tasking, or exfiltrates data.
If you plan on doing the same thing on Windows, be aware you need to disable the Dnscache service. It's impossible in Windows to screen the loopback network interface, meaning you can't filter which programs get DNS access while "DNS Client" is running; it's all or nothing. DNS is a very popular covert exfiltration channel.
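For reference, disabling that service from an elevated prompt might look like the sketch below. This is an assumption-laden example, not verified against every Windows version; on recent releases the service is protected and `sc config` may be refused, so the registry fallback is included:

```shell
:: Stop the DNS Client service and prevent it from starting at boot.
sc stop Dnscache
sc config Dnscache start= disabled

:: Fallback if the above is refused: set the service's Start type
:: to 4 (disabled) directly in the registry, then reboot.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Dnscache" /v Start /t REG_DWORD /d 4 /f
```

Note that with Dnscache disabled, per-process DNS lookups go out directly from each program, which is what makes per-application filtering possible again.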
This project looks awesome. I just looked at the code, and it looks like every line has a comment. It seems like a bit of overkill, Obj-C being such a verbose language. Aside from that, I'm definitely going to check this out.
LuLu is a billion-dollar hypermarket chain. I think it would be a good idea to rename this project early on if you don't want to get into any trademark issues.
Many countries have a “well-known trademark” doctrine where a mark can be so famous that any business using it could be a source of consumer confusion. For example, if you see that Coca-Cola has released a firewall, you might well think it has some connection to Coca-Cola, even if you know they are not currently in the software business. Lulu supermarket may not be well known enough to enjoy that kind of protection.
Sometimes they can be the same business in different markets and use the same trademark.
Think about how many cities have "Great Wall" Chinese restaurants.
Most people only know trademarks at a national level. In the United States, at least, each state can grant its own trademarks. I've done it in Illinois ($50/10 years), and Texas ($10/10 years).
German hypermarket 'Metro' sued Microsoft over naming because they registered their name for just about any business sector, including computers and data processing. https://arstechnica.com/information-technology/2012/08/micro... They have sued others before, including public transport companies and a disco. I believe LuLu is fine though; it's Metro AG who are far overreaching.