groundthrower's comments | Hacker News

I have some questions on this.

Living on the west coast of Sweden, our family went to Stockholm this summer, and as the temperature rose we asked where to go for a swim. Basically no one knew, and we didn’t see people doing it either.


That's rather weird, it's one of the most popular summer activities in Stockholm. I personally swim almost everywhere around the city with friends for most of the summer; you just need to check Havs- och vattenmyndigheten's daily updates beforehand to see whether any bathing areas have been flagged as unsuitable that day [0].

Most areas in the inner city are considered suitable for most of the summer, and outside the small centre there are lakes everywhere; quite a few are secluded enough that you can find spots to go skinny dipping without a second thought.

[0] https://www.havochvatten.se/badplatser-och-badvatten/kommune...


Some popular ones in the inner city: Tanto strandbad, Långholmsbadet, Smedsuddsbadet, Fredhällsbadet, Kristinebergsbadet, Brunnsviksbadet. Plenty of folks swim there during hot periods in summer.


Yes, we went to Tanto, but considering how much water Stockholm is surrounded by, I can’t say it’s very accessible. That is also the view of some friends living there.


If it's a beach with shallow water that's appropriate for smaller kids you're after, then yes, you probably have to go to the more developed sandy beaches where the water stays shallow some distance out.

Many places in central Stockholm have been built out from the natural shoreline by reclaiming water, so the water is deep right at the edge (these probably held quays used for transport before trucks took over), but even in many places like that nobody would bat an eye if someone took a non-nudist swim.

Apart from those extremes, most locals from their teens and upwards will find cliffs by the water; as mentioned, not appropriate for smaller kids and seniors since the water right next to them is deep, but perfect otherwise.


Why do we always have hardening guides? Ain’t there any OS where an easing/loosening guide is needed instead?


At one point, SELinux being on by default made one of the Red Hat distros a pain. This high-friction first impression cost them some adoptions when an IT manager did a test install. A "softening guide" might've helped.


The softening guide was one command to put SELinux into permissive mode, and those managers couldn't even handle that.
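
For anyone curious, that one command (plus the config line to make it stick across reboots; paths here are the stock RHEL ones):

    # switch SELinux to permissive immediately (until next reboot)
    setenforce 0

    # make it persistent: in /etc/selinux/config set
    SELINUX=permissive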


IIRC, the managers were qualified sales leads who were actively looking to move to a supported Linux platform, but got turned off by the installer-and-docs out-of-box experience, which seemed like it was going to make a lot of extra work for them.

I just meant the "softening guide" might've helped from the perspective of the company who'd like to land those customers. I don't think it's the best way, but at the right moment it might've salvaged some sales.


You may be describing OpenBSD ("secure by default") and its FAQ (how to do what you want with it). The OP's hardening guide might be largely seen as going to greater lengths than most people need. (I use its advice about umask, though.)
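
For the umask part, it boils down to a single line in your shell startup file; 077 here is just the typical restrictive value, not necessarily exactly what that guide recommends:

    # newly created files get owner-only permissions by default
    umask 077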


Good point!

I sometimes wish there were a straightforward way (one that doesn't reduce functionality) of configuring a normal Linux distro in 'single user' mode.


So our small company has something like foo-bar.com. Foobar.com has been taken for a long time; there’s nothing there, and they do not respond to our emails either.

This has been the case for a few years now. I guess it works, but ideally we would really like the one without the hyphen.


We just did Cyber Essentials, as we had no certifications and a customer in the UK required it. Then we thought we could go ahead and get Cyber Essentials Plus as well. At least we can now say we have had a cybersecurity audit from an external party - hopefully it will lend some more credibility to these endless questionnaires.


So much this


Have cycled from Northern Europe to South Africa, crossed Europe several times, and also crossed the US, New York to California. Best way to experience a country.

Don’t have time? Try to do a quick one and turn it into an adventure. One of my most exciting trips was going from Southern to Northern Europe in just a week.

I recommend packing lightly, and don’t spend too much time planning gear and routes - just do it, otherwise you risk postponing it again and again. Just do it.


> One of my most exciting trips was going from Southern to Northern Europe in just a week.

Now that is impressive. Do you remember your route?


Yes, this is one concern. Are you sure it was a result of using non-ECC memory, and how did you find out that was the cause?


We could never be absolutely sure, due to the true Heisenbug nature of the behavior, but after tons of code audits, and after reverse-proxy traffic analysis showed that it only occurred on the non-ECC hosts and never on the ECC hosts, we concluded it was the most likely culprit.

The fact that the errors were single bit errors also strongly pointed in that direction.


We do not do any disk operations at all


OK, just check how the network stats look on both setups. How are you testing the remote environment? Is the traffic coming from your local environment or from the same cloud environment?


It’s written in Rust and uses Rayon heavily. It receives data to crunch maybe once every 5 minutes.


The suggestion from dragontamer to set up a profiler seems like one approach to diagnosing this.

Also joshdev's suggestion to try AWS Graviton, which is ARM-based as well but potentially better suited for cloud hosting than an M1.

If you figure this out, definitely write it up -- very cool tech blog topic; most people never get to debug CPU architecture firsthand.


It does not consume much memory but does lots of allocations/deallocations. No disk operations whatsoever.


M1 has a larger L1 cache, but smaller L3 cache.

It could very well be that your application has a memory-access pattern that favors the larger L1 cache, while the huge L3 cache of the EPYC isn't helping.

------

If you really wanted to know, you should learn how to use hardware performance counters and check out the instructions-per-clock. If you're around 1 or 2 instructions per clock tick, then you're CPU-bound.

If you're less than that, like 0.1 instructions per clock (ie: 10 clocks per instruction), then you're Cache and/or RAM-bound.

-----

From there, you continue your exploration. You count up L1 cache hits, L2 cache hits, L3 cache hits and cache-misses. IIRC, there are some performance counters that even get into the inter-thread communications (but I forget which ones off the top of my head). Assuming you were cache/ram bound of course (if you were CPU-bound, then check your execution unit utilization instead).
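
On Linux, a rough starting point (a sketch, assuming perf is installed and the generic cache events are wired up for your CPU; your binary stands in for ./your_app here) would be:

    # instructions vs cycles gives you IPC directly
    perf stat -e instructions,cycles,cache-references,cache-misses ./your_app

    # per-level detail; exact event names vary by CPU/PMU
    perf stat -e L1-dcache-loads,L1-dcache-load-misses,LLC-loads,LLC-load-misses ./your_app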

EPYC unfortunately doesn't have very accurate default performance counters, and I'd bet that no one really knows how to use M1 performance counters yet either.

While the default PMC counters on AMD/EPYC are inaccurate (but easy to understand), AMD has a second set of hard-to-understand but very accurate profiling counters called IBS (Instruction-Based Sampling): https://www.codeproject.com/Articles/1264851/IBS-Profiling-w...

Still, having that information ought to give you a better idea of "why" your code performs the way it does. You may have to activate IBS-profiling inside of your BIOS before these IBS-profiling tools work.

By default, AMD only has the default performance counters available. So you may have a bit of a struggle juggling the BIOS + profiler to get things working just right, and then you'll absolutely struggle to understand what the hell you're even looking at once all the data is in.
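
As a sketch only, assuming the box runs Linux and the kernel exposes IBS through perf's ibs_op PMU (check /sys/bus/event_source/devices/ before trusting the event name), a first attempt could look like:

    # sample via IBS instead of the regular PMC events; ./your_app is a placeholder
    perf record -e ibs_op// -- ./your_app
    perf report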


This.

I have dabbled with the AMD & Intel Xeon side of this, but never on macOS. Do you have an idea how one would go about getting performance counters on macOS? IPC, L1 hit/miss, L2 hit/miss, etc.


Unfortunately not. I only have experience on the AMD-side as I played around on my own personal computer.


Thanks, appreciated!


I’d suggest investigating single core performance. If you have the money, buy an i9-12900K (slightly faster single-core than M1 but much hotter) and do some testing on that. If my theory is correct, performance will be even better.


We have examined that as well. Last week we tried an AMD 5950X, which has half the number of cores but much better single-core performance - the result was still at 60% of the EPYC performance.


What was the M1 % relative to your Epyc?


Roughly 10% faster


Have you investigated memory constraints?

Ryzen is 2 channels; Epyc is 4-8 (depending on CPU). M1 has that stupidly fast/wide setup.

If your Epyc is one of the 4 channel optimized SKUs or is only running in 4 channel mode, you would get pretty close to the quoted ratios on a memory bandwidth test.

Correlation, not causation, but worth looking into.
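
One quick-and-dirty way to check (a sketch, assuming Linux with sysbench installed; a proper STREAM run would be more rigorous) is to compare the reported MiB/sec across the machines:

    # synthetic sequential memory-bandwidth test
    sysbench memory --memory-block-size=1M --memory-total-size=32G run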


Also check the Nodes per Socket (NPS) setting on the EPYC.
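
You can usually infer the current NPS value without rebooting into the BIOS by counting the NUMA nodes Linux exposes (assuming numactl is installed):

    # on a single-socket EPYC: one NUMA node = NPS1, four nodes = NPS4, etc.
    lscpu | grep -i numa
    numactl --hardware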


HN makes us wait between replies… so if we need to continue this further, I'm reachable at muse.theses-0z@icloud.com.

My next question would be whether you ran the 12900K with dual-channel memory.


As others have noted, this sounds like a contention issue that you should fix by not allocating in your hot path if at all possible. The easiest fix would probably be to switch out your global allocator for something like https://github.com/gnzlbg/jemallocator and see if that doesn't give you a nice performance boost.
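
For anyone who hasn't wired it in before, a minimal sketch based on the crate's documented usage (the version number is an assumption, and the maintained fork is tikv-jemallocator these days):

    // Cargo.toml:
    // [dependencies]
    // jemallocator = "0.5"   // version is a guess; check crates.io

    use jemallocator::Jemalloc;

    // replaces the system allocator for the entire binary,
    // including allocations made on Rayon worker threads
    #[global_allocator]
    static GLOBAL: Jemalloc = Jemalloc;

    fn main() {
        let v: Vec<u64> = (0..1_000).collect();
        println!("{}", v.len());
    }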


Hmm, yes we are already using jemallocator actually


It sounds like you might be running into some sort of contention.

