
Not sure about the ones mentioned in the article, but for the kind I'm used to (i.e. Bäsk) in Sweden it's a given.

In our family it's generally been a tradition to go out on the night of August 24th each year to pick some wormwood, and then infuse some plain alcohol with it to have for the coming months. We generally don't leave it in for as long as recipes call for though, 24h instead of multiple days, so the taste is a bit milder.


Anything special about August 24th that makes it the day to do this?


It's the day when all farmers should be done harvesting and autumn officially begins according to "Bondepraktikan" [1], which says the harvest should be finished by St Bartholomew's Day.

Like many old traditions, the reasons have for many become lost to time, and now it's simply an accepted fact that that's the magical night to get some wormwood.

[1] https://en.m.wikipedia.org/wiki/Old_Farmer%27s_Almanac


Yeah, it always looks so cool. The device uses a CRT vector display, so instead of the CRT drawing the image pixel row by pixel row, each shape on the screen is drawn one by one as small line segments. Curves are also possible, but you'd have to formulate the vector shape for them yourself, which is harder than for straight lines.
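To give an idea of what "formulating the vector shape" means in practice, here's a rough Rust sketch (not tied to any real hardware; the Point type, arc parameters, and segment count are all made up for illustration) of breaking a curve into the short straight segments the beam would actually trace:

    // Approximating a circular arc as short line segments, the way a curve
    // would have to be prepared before a vector display could draw it.
    #[derive(Debug, Clone, Copy)]
    struct Point {
        x: f32,
        y: f32,
    }

    /// Break a circular arc into `segments` straight strokes, returned as
    /// (start, end) pairs the beam could trace one after another.
    fn arc_to_segments(cx: f32, cy: f32, r: f32, a_start: f32, a_end: f32, segments: usize) -> Vec<(Point, Point)> {
        let step = (a_end - a_start) / segments as f32;
        (0..segments)
            .map(|i| {
                let a0 = a_start + step * i as f32;
                let a1 = a0 + step;
                (
                    Point { x: cx + r * a0.cos(), y: cy + r * a0.sin() },
                    Point { x: cx + r * a1.cos(), y: cy + r * a1.sin() },
                )
            })
            .collect()
    }

    fn main() {
        // A quarter circle drawn as 8 short strokes.
        for (p0, p1) in arc_to_segments(0.0, 0.0, 100.0, 0.0, std::f32::consts::FRAC_PI_2, 8) {
            println!("line from {:?} to {:?}", p0, p1);
        }
    }

The more segments you spend on a curve, the smoother it looks, which is exactly the trade-off those early games had to make.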

It also looks even cooler in person, as the refresh rate is really good on those CRTs. If there's an old arcade nearby with Asteroids or a similar early vector game, I'd really recommend going to see it.


Never had the chance to use Quickwit at a $DAYJOB (yet?), but I really appreciate the fact that it scales down quite well too. Currently running it on my homelab, after a number of small annoyances using Loki in a single-node cluster, and it's been working very well with very reasonable resource usage.

I also decided to use Tantivy (the Rust library powering/written by Quickwit) for my own bookmarking search tool by embedding it in Elixir, and the API and docs have been quite pleasant to work with. Hats off to the team, looking forward to what's coming next!
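For anyone curious what that looks like on the Rust side, here's a minimal sketch of the core Tantivy flow (assuming a recent tantivy release; the bookmark schema and document are made up for illustration, and the Elixir embedding layer is left out):

    // A rough sketch of indexing and searching with Tantivy.
    use tantivy::collector::TopDocs;
    use tantivy::query::QueryParser;
    use tantivy::schema::{Schema, STORED, TEXT};
    use tantivy::{doc, Index};

    fn main() -> tantivy::Result<()> {
        // A tiny bookmark schema: a stored title plus indexed body text.
        let mut schema_builder = Schema::builder();
        let title = schema_builder.add_text_field("title", TEXT | STORED);
        let body = schema_builder.add_text_field("body", TEXT);
        let schema = schema_builder.build();

        // In-memory index for the example; a real setup would persist to disk.
        let index = Index::create_in_ram(schema);
        let mut writer = index.writer(50_000_000)?;
        writer.add_document(doc!(
            title => "Quickwit on the homelab",
            body => "Notes on running log search with very reasonable resource usage."
        ))?;
        writer.commit()?;

        // Parse a query against both fields and print the top matches.
        let reader = index.reader()?;
        let searcher = reader.searcher();
        let query = QueryParser::for_index(&index, vec![title, body]).parse_query("homelab")?;
        for (score, addr) in searcher.search(&query, &TopDocs::with_limit(10))? {
            println!("score {score}: {addr:?}");
        }
        Ok(())
    }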


Ah Loki, I wanted to try it in my homelab but it wasn't as simple as it says. Now I want to try Zincsearch or Openobserve. Have you tried those?


You might want to have a look at SigNoz [1] as well. We have also published some performance benchmarks against Elastic & Loki [2] and have some cool features like a logs pipeline for manipulating logs before ingestion.

[1] https://github.com/signoz/signoz
[2] https://signoz.io/blog/logs-performance-benchmark/


In case it matters to others, https://github.com/openobserve/openobserve/tree/v0.7.0 is the last Apache2-licensed copy before they went AGPL with 0.7.1.

https://github.com/openobserve/openobserve/blob/v0.7.0/.env.... is some "onoz" for me, but just recently someone submitted https://github.com/aenix-io/etcd-operator to the CNCF sandbox, so maybe things have gotten better around keeping that PoS alive.


> it wasn't as simple as it says

Mind elaborating? We built Loki for some pretty massive scale, but I've always tried to make it work at super small scale too. What went wrong?


I use OpenObserve and I quite enjoy it


Tantivity is great!

Here is a Postgres extension that uses it to provide full-text search:

https://blog.paradedb.com/pages/introducing_bm25

https://news.ycombinator.com/item?id=37557127


tantivy, not tantivity!!!!!


Some companies are using it with AWS Lambda to scale to 0.


https://pv.wtf

Just got started this year so it's only got two posts so far. One on logging and the other on config languages. I'd like to spend a bit more time writing but still need to build it into a habit.


I think there's nothing currently that combines both logging and metrics into one easy package and visualizes it, but it's also something I would love to have.

Vector[1] would work as the agent, being able to collect both logs and metrics. But the issue would then be storing them. I'm assuming the Elastic Stack might now be able to do both, but it's just too heavy to deal with in a small setup.

A couple of months ago I took a brief look at that when setting up logging for my own homelab (https://pv.wtf/posts/logging-and-the-homelab), mostly looking at the memory usage to fit it on my Synology. Quickwit[2] and Log-Store[3] both come with built-in web interfaces that reduce the need for Grafana, but neither of them does metrics.

[1] https://vector.dev
[2] https://quickwit.io/
[3] https://log-store.com/


Nice experiment.

Side note: it should be possible to tweak some config parameters to optimize the memory usage or CPU usage of Quickwit. Ask us on the Discord server next time :)


Thanks!

Yeah, I was a little bit surprised it was so close. And I've been using Tantivy (the library which powers Quickwit, AFAIK) in another side project, where it used comparatively less.

Might jump in there as an excuse to fiddle a bit more with the homelab soon then :)


You should try OpenObserve. It combines logging, metrics, and dashboards (and traces) in one single binary/container.



Does using Vector commit you to DataDog in any way?


Not at all.


Thanks!

Nickel was among the other languages that popped up in my search, but I sadly didn't have enough time to do a deep dive.

I'm also on the verge of experimenting with swapping out the HCL for something else in my homelab setup. It currently uses Terraform and Nomad, and it's fine, but I always have a feeling there should be something more ergonomic.


Now, I use vim mode in Sublime, so I have already "seen the light", so to speak. But that situation is probably one I'd actually be able to solve faster with multiple cursors.

What I'd do is this: Put the cursor on one of the calls, then use ctrl-d to put a new cursor at all the matches. Then use ctrl-right-arrow to move by word until the second comma, combine that with shift to select the argument and edit all of them with visual feedback to my heart's content.

In reality, depending on the complexity of the text, after creating all the cursors I'd probably use the vim movements to get them where I need them, though.


I think they extended it fairly recently; I also live in Neukölln and fell outside of the delivery area before Christmas, at least. But I did a quick check after reading the article, and now everything seems to be working.


Lucky you. I checked here (https://wolt.com/en/discovery) and they still don't seem to deliver (Richardkiez). Maybe I need to check in the app?


Yeah, I instantly know which company you're talking about...

I know a couple of developers there, and as far as I understood, all applicants have to do the IQ test. They also thought it was ridiculous, but the CEO really, really likes it, so it stays.


It is basically an issue with the thermal capacity of the chip: a larger die means more heat to dissipate.


This right here is the big one, to be honest. Yes, clocks, latency between parts of the die, etc. are all problems - but they can be worked around with some effort.

Thermals and also power delivery are huge problems with large chips; just compare the massive 471 mm2 die of the GP102 (1080 Ti/Titan X) to the 150 mm2 die of the Coffee Lake hexacore chips. The GP102 can draw 250-300W depending on boost clock, and the Core i7-8700K can also draw upwards of 200W depending on how high you push the clocks and vCore (to keep said clocks stable).

There's a reason why board-partner GPUs always have huge coolers attached to them, and why people pushing CPU clocks are often using at least a giant air cooler like the Hyper 212 EVO or an AIO liquid cooler with a 240mm+ radiator.

Hell, let's skip thermals and just talk electricity - getting 200W+ of stable power to the cores on these dies is no easy task as-is. That's why you have people like buildzoid doing reviews of power delivery on motherboards and GPU boards, to see if the VRMs are going to blow up trying to power your expensive hardware if you're overclocking (or sometimes even if you aren't).

All in all, we have thermal and power scaling issues at current chip sizes - making them bigger isn't particularly feasible unless everybody is going to start installing 360mm radiators in their systems, and even that might not be enough depending on clock speeds and the vCore required to maintain them.


The 8700K pushes 130W at 5 GHz with an all-core OC under a full AVX load without offset; even under LN you won't get 200W from an 8700K.

