
I've done the same recently. Haven't seen colors on my phone for a few weeks now.


Only one meal a day? What do you mean?


I fast all day and eat everything I need at night: a huge, delicious meal plus dessert. There are tons of benefits in logistics/time, health, and aesthetics. Google intermittent fasting or "The Warrior Diet".


That sounds like it would create an insane glucose spike though.


That was exactly what I thought.

Plus I sleep better if I only eat a light meal in the evening. I tend to have a very light breakfast, a decent meal at lunchtime, and a light meal in the evening. I keep sugar out and carbs low.

I guess different things work for different people.


> That sounds like it would create an insane glucose spike though.

That's not what really happens, though. When you eat less frequently, your body improves its insulin usage and glucose levels stay steady all day.


There is some effect at work here, though individual experiences may differ. If I eat one meal per day, I feel lazy and quite tired after that meal.

So if I finally ate at 6pm, I would feel like sleeping for hours afterwards.

And if I don't eat until after 6pm, I start getting weak/shaky once 6pm passes.

This isn't a result of lack of acclimation. I've done fasting for 4+ month stretches multiple times in the past 6 years and have even experimented with 24+ hour fasts on a regular basis.


> If I eat one meal per day after that meal I will feel lazy and quite tired.

It takes some days/weeks (depending on the individual) for your body to switch from burning only carbs to burning carbs & fat. When you feel lazy and tired, it's because your body isn't yet burning fat properly to supply the energy required, so it asks for food whenever it runs low on easily accessible carbs.

> If I finally eat any later than 6pm I will start getting weak/shaky beyond 6pm.

Yup! I feel the same. I eat at 8-9pm and around 11pm I sleep like a baby. Win-win.

> This isn't a result of lack of acclimation.

Now that is a difficult point to prove.


I've done keto for many months, so I'm quite used to the switch-over you describe.


> Assuming the RRs for the domains you are querying are signed, that's (IMO) probably all you need to do. While OCSP happens over plain-text HTTP, the responses are also signed so that you can verify them.

I will make sure that's the case. What I haven't explained yet is that I am doing this for a university project and I am fishing for extra points. So I was trying to justify running my own DNS server. Is that reasonable?

> I don't think there's much more you can really do (as the DNS queries/responses will travel over the Internet "in the clear" -- and, thus, subject to tampering/modification).

I really should have done more research on this, but I imagined I could encrypt the DNS queries themselves and forward them to a public recursive DNS server. Could I not use DNSCRYPT or DNS-over-TLS for this purpose?
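
For example, I was picturing something along these lines -- just a rough sketch using Python's dnspython library, and assuming the public resolver (1.1.1.1 here) accepts DNS-over-TLS on port 853:

    # Rough sketch: forward a query to a public resolver over DNS-over-TLS.
    # Assumes dnspython >= 2.0 and a resolver listening for DoT on port 853.
    import dns.message
    import dns.query
    import dns.rdatatype

    def resolve_over_tls(name, resolver_ip="1.1.1.1"):
        query = dns.message.make_query(name, dns.rdatatype.A, want_dnssec=True)
        # The query travels to the resolver inside a TLS tunnel, so a local
        # eavesdropper can't read or tamper with it on this first hop.
        response = dns.query.tls(query, resolver_ip, timeout=5)
        return [rr.to_text() for rrset in response.answer for rr in rrset]

    print(resolve_over_tls("example.com"))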

> Also, an attacker could block your HTTP requests (for CRL downloads/OCSP queries). How does your application react when it doesn't get a response? "Fail open" or "fail closed"?

Assuming the server has at least downloaded an initial CRL, I could always fall back to that. I haven't played much with this yet, but I think that's the big advantage of a CRL versus an OCSP query, no?

I guess I should "fail closed" to cover all holes but then I'm basically letting the attacker DoS the server. What is best?


Sorry for the delayed response, I just noticed your reply...

> What I didn't explain yet is that I am doing this for a university project and I am fishing for extra points.

Okay. I was, for a few years, a "part-time" professor, so I'll respond with my professor hat on then...

> So I was trying to justify running my own DNS server. Is it reasonable?

That depends. Does running your own DNS server get you anything for this project? By that I mean, does it increase the security of your system any? If it does, then your professor may notice and/or appreciate the fact that you are "going the extra mile" to make it even more secure. If the overall security doesn't increase by adding a DNS server, it may be best to skip it. Additionally, if misconfiguration of the DNS server could decrease the security of the overall system, you may want to re-evaluate running your own.

> I really should have done more research on this, but I imagined I could encrypt the DNS queries themselves and forward them to a public recursive DNS server. Could I not use DNSCRYPT or DNS-over-TLS for this purpose?

Sure, you could route your queries through the public resolver. Again, what does that get you? I'm assuming the goal is to protect against an attacker tampering with DNS responses, yes?

So, as we mentioned earlier, if DNSSEC is in place and all responses are signed, you're good. Nothing to worry about.

For now, however, let's assume that not all responses will be signed (I think this is a fairly safe assumption, for at least some public CAs; for your project, this may or may not be the case).

Without DNSSEC and even with DNSCRYPT or DNS-over-TLS to a public resolver, there is an opportunity for an attacker to tamper with DNS responses. Is tampering more likely to occur if you run your own DNS resolver or if you use a public one? Either way, the responses will be travelling over some portion of the Internet in the clear -- whether that is between you (if running your own server locally) and the root/authoritative nameservers or between the public resolver you're using and the root/authoritative nameservers. I'm not sure you can easily figure out the answer to that question.

If you use Google's public DNS service (8.8.8.8), for example, the DNS query/response will be "in the clear" (and subject to tampering) on the path between Google and the root/authoritative nameserver (depending on which one you're querying at the time). If you instead use a public DNSCRYPT server, the same will still be true. In between their server and the root/authoritative server, the query/response will still be in the clear. The only time this wouldn't be the case is when the root/authoritative nameservers themselves were available via DNSCRYPT/DNS-over-TLS.

Basically, unless everything is 100% DNSSEC, there's an opportunity for tampering. Fortunately, though, OCSP responses are signed, as we mentioned earlier. If the OCSP response comes with a valid signature, it can be trusted -- whether DNSSEC is in use or not. A response with an invalid signature (for whatever reason) obviously cannot be trusted. Let me ask you that first question again with this in mind: will running your own DNS server increase the security any? Will it increase the amount of trust you have in the validity of the certificates presented to you by clients?
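
To make that concrete, here's a rough sketch of an OCSP check in Python with the cryptography package. It simplifies things by assuming the response is RSA-signed directly by the issuing CA's key (real code must also handle delegated responder certificates):

    # Sketch: fetch an OCSP response and verify its signature before trusting it.
    # Simplifying assumptions: the issuer signed the response itself (no delegated
    # responder) and the signature is RSA with PKCS#1 v1.5 padding.
    import urllib.request
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.x509 import ocsp

    def check_ocsp(cert, issuer, ocsp_url):
        req = (ocsp.OCSPRequestBuilder()
               .add_certificate(cert, issuer, hashes.SHA1())
               .build())
        http_req = urllib.request.Request(
            ocsp_url,
            data=req.public_bytes(serialization.Encoding.DER),
            headers={"Content-Type": "application/ocsp-request"})
        with urllib.request.urlopen(http_req, timeout=10) as resp:
            ocsp_resp = ocsp.load_der_ocsp_response(resp.read())

        if ocsp_resp.response_status != ocsp.OCSPResponseStatus.SUCCESSFUL:
            raise ValueError("OCSP responder returned an error")

        # Verify the signature over the response; raises if it is invalid.
        issuer.public_key().verify(
            ocsp_resp.signature,
            ocsp_resp.tbs_response_bytes,
            padding.PKCS1v15(),
            ocsp_resp.signature_hash_algorithm)

        return ocsp_resp.certificate_status  # GOOD, REVOKED or UNKNOWN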

> Assuming the server has at least downloaded an initial CRL, I could always fallback to that. I haven't played much with this yet, but I think that's the big advantage of a CRL versus an OCSP query, no?

Yes, and you probably should fall back to the cached CRL on OCSP failure. One of the obvious advantages of CRLs (once they are downloaded) is that they can be cached. If the HTTP server goes down afterwards, the client can (and should!) use their cached version. Just make sure to refresh the CRL before it expires! CRLs (for public CAs) aren't really very feasible anymore simply because of their size. Some of them are hundreds of MBs! Can you imagine having to download a 300 MB CRL on your mobile phone just because you visited some new web site for the first time!? That just isn't practical, and it's why we have OCSP.
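
As a rough sketch of that fallback logic (Python with the cryptography package again; the cache path and the fail-open/fail-closed policy flag are purely illustrative):

    # Sketch: consult a locally cached CRL when the OCSP query fails.
    # Returns True when the certificate should be rejected.
    import datetime
    from cryptography import x509

    def should_reject_via_cached_crl(cert, crl_path="cached.crl", fail_closed=True):
        try:
            with open(crl_path, "rb") as f:
                crl = x509.load_der_x509_crl(f.read())
        except OSError:
            # No cached CRL at all: apply whatever fail-open/closed policy you chose.
            return fail_closed

        # A CRL past its nextUpdate time should be refreshed, not trusted blindly.
        if crl.next_update < datetime.datetime.utcnow():
            return fail_closed

        return crl.get_revoked_certificate_by_serial_number(cert.serial_number) is not None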

(Another advantage of CRLs is that a client doesn't give away any information about what sites it is visiting because lookups are done locally (on the cached CRL). With OCSP, a passive listener can obtain the hostnames.)

> I guess I should "fail closed" to cover all holes but then I'm basically letting the attacker DoS the server. What is best?

That's another question with "that depends" as the answer.

All modern web browsers fail open (by default) if OCSP fails, but is that acceptable for your project? What are the potential damages and/or losses if an attacker is able to prevent you from verifying that a certificate is valid? You'll have to weigh the chances and risk of this happening against the security requirements of your project and determine whether fail open or fail closed is the right choice for you.

I hope this helps. Good luck!


> Basically, unless everything is 100% DNSSEC, there's an opportunity for tampering. Fortunately, though, OCSP responses are signed, as we mentioned earlier. If the OCSP response comes with a valid signature, it can be trusted -- whether DNSSEC is in use or not. A response with an invalid signature (for whatever reason) obviously cannot be trusted. Let me ask you that first question again with this in mind: will running your own DNS server increase the security any? Will it increase the amount of trust you have in the validity of the certificates presented to you by clients?

That was very enlightening, thank you for your help.

I can see it won't improve my trust in the OCSP responses or CRLs, but wouldn't it be desirable to encrypt as much traffic (e.g. DNS queries) as I can, in an effort to prevent a potential attacker from employing traffic analysis techniques to learn something? That hypothetical scenario doesn't seem so far-fetched to me.

Eventually, my encrypted DNS query will be forwarded to a DNS resolver that doesn't employ encryption, and at that point it will be in the clear -- but how would the attacker know where to look next? If the original DNS query is encrypted, an attacker sniffing the network won't be able to tell which DNS resolver the query was forwarded to, right? If so, they can't follow up on that, and I've successfully prevented them from learning what the DNS query was about.

In short, I am not trying to increase my trust in the exchanges, but rather hide them as much as possible for the sake of obscurity. Is this reasonable?


Why? Is it really that bad of an analogy for an absolute beginner?


It's not terrible for an absolute beginner, but it's fairly harmful overall. People tend to use this analogy to conflate specific AI with general AI and to argue for regulatory capture based on claims with no evidence behind them. The real brain is sparsely connected and has multiple activation networks that reuse nodes. We also have the ability to learn from single examples and generalize to things we've never seen before, so it seems unlikely that our brain operates exclusively by taking derivatives of an error function or other data-fitting techniques. Humans still seem more unreasonably effective than deep learning on many tasks, and that's despite facing a harder problem (humans deal with more unlabelled data, as far as I can tell).


I think so, primarily due to Dijkstra's anti-anthropomorphic stance, which is very important here.

1. As the other poster noted, people are more apt to conflate "strong AI" with what we're actually doing with TensorFlow, leading to very weird overreactions that aren't germane.

2. Just as importantly, developers who believe this line of thinking are biased against a more correct understanding of their code, which makes debugging much more difficult and prevents advances in the underlying technology.

The implied abstraction ... is however beyond the computing scientist imbued with the operational approach that the anthropomorphic metaphor induces. In a very real and tragic sense he has a mental block: his anthropomorphic thinking erects an insurmountable barrier between him and the only effective way in which his work can be done well.


How is it much easier to maintain?


Because when everything is a class (or better yet, when every selector is a single class, as BEM strives for), overriding rules is much easier because they all have the same specificity.

When you mix element selectors with class, ID, and multi-class (.foo.bar) selectors, the specificity of each is different, and overriding them means writing needlessly complicated selectors that are in turn harder to maintain.

Bootstrap 4 goes as far as eliminating most sibling/child combinators (>, +, etc.) because the extra selectors they require add specificity. Anyone who's tried to write custom classes for list elements in Bootstrap 3 (.list-inline>li) has experienced this.

http://v4-alpha.getbootstrap.com/migration/#navs


Why is not using flexbox a bad thing? It recommends the new CSS Grid instead.


It's too soon to rely on CSS Grid. A lot of people don't (and sometimes can't) use a browser that's less than four months old. Can I Use estimates ~44% of users don't use a browser that supports the current syntax [0]. IE11 will never support the current CSS Grid syntax but does support Flexbox (with some bugs). Edge will support the current CSS Grid syntax but doesn't yet. Older phones and tablets won't support CSS Grid.

[0] http://caniuse.com/#feat=css-grid


Whether it's too soon depends on your requirements, but Shoelace is forward-thinking and will only become more useful and relevant as CSS Grid adoption rises. As for Edge, support is 100% as of Edge 16.


If you're developing sites for circumstances where you can control what browser clients use, fine. Very few sites are developed in that situation.

Edge 16 hasn't been released yet.

I look forward to CSS Grid being widely available, and in the meantime degrading gracefully in non-supporting browsers is often viable. My concern is for the users of sites built by the many developers who give no thought to how their site performs in anything other than the browser they personally use.


I was under the impression this project also depends on CSS variables.

If your target browser can't support CSS Grid, it's probably too old to support CSS Variables (CSS custom properties) anyway.


Chrome, Firefox, and Safari added support for the current CSS Grid syntax in March 2017. CSS Variable support is older: Chrome and Safari have supported CSS Variables since March 2016, and Firefox since July 2014. Edge only added support (with bugs) this year, and of course IE11 will never support them. Even older devices that can't upgrade to iOS 10 or later, like the iPhone 4S, can use CSS Variables.

http://caniuse.com/#feat=css-variables


For those less-than-brand-new browsers, there's at least one polyfill (https://github.com/FremyCompany/css-grid-polyfill).

Not ideal, sure (especially if you're JavaScript-averse, like I am), but it's a start.


Irrelevant for a framework that wants to be future-proof and bleeding edge.


Anything widely implemented by browsers is future-proof. Being on the bleeding edge guarantees failure for lots of users.


Relevant, however, for a framework to be used in production.


https://github.com/claviska/shoelace-css/pull/10

This PR will reduce the size to 18 KB.


You've just told the whole world. Not so hard anymore.


Crap.


Nice! Subscribed


How does it work?


It's super simple: 11 lines of code, plus the lines for the topics and descriptions.

I have a topics hash/dictionary.

If the helpme command is run without a topic argument, it prints out all the topic keys in the dictionary, one per line.

If it is run with a topic argument, it just prints out the value for that topic.
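
Roughly something like this (a hypothetical Python sketch of the idea; the actual script and topics will differ):

    #!/usr/bin/env python3
    # Sketch of the idea: a topics dictionary plus a tiny argument check.
    import sys

    TOPICS = {
        "git-undo": "git reset --soft HEAD~1   (undo the last commit, keep the changes)",
        "tar-extract": "tar -xzvf archive.tar.gz",
    }

    if len(sys.argv) < 2:
        print("\n".join(TOPICS))                      # no argument: list topics, one per line
    else:
        print(TOPICS.get(sys.argv[1], "unknown topic"))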

