vergessenmir's comments (Hacker News)

The package situation on anything that isn't Arch (and, I think, Fedora) is pretty rough. I installed it from source. It helps that it's a Rust application; I was up and running in no time.


See my comment above (moved from i3wm), but my spec is:

RTX 3090, Pop OS 24.04 (beta), 4K 43" Monitor

Nvidia cards worked out of the box with no problems.


Moved over to Niri yesterday after having to reinstall my Pop OS, and it just clicked, like i3wm did all those years ago.

I can focus for hours on end and spend zero mental energy on resizing a window. i3wm was already good at this, but you always had to readjust after a few windows were tiled onto a workspace. That final bit of cognitive overload is gone with Niri.

EDIT: Spec: RTX 3090, Pop OS 24.04 (beta), 4K 43" Monitor

Niri installed via cargo build; super easy install. Make sure you also install xwayland-satellite so that you can run VS Code, Obsidian, Zoom, Blender and other X11-only applications.
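For anyone trying the same setup: a minimal sketch of the relevant bit of niri's KDL config, assuming xwayland-satellite is on your PATH. The display number is an assumption and may differ on your machine.

```kdl
// ~/.config/niri/config.kdl (fragment, illustrative)
// launch the X11 bridge at startup so X11-only apps can connect
spawn-at-startup "xwayland-satellite"

environment {
    // point X11 clients at the satellite's display;
    // ":0" is an assumption, check xwayland-satellite's log
    DISPLAY ":0"
}
```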


Harbor has its pain points, but it is infinitely easier to get up and running than crufty Artifactory.

One glaring omission is the lack of support for proxying docker.io without the project name, i.e. pulling nginx:latest instead of /myproject/nginx/nginx:latest.

The workaround involves URL-rewrite magic in your proxy of choice.
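For the curious, a sketch of what that rewrite magic looks like in nginx. The project name ("dockerhub") and upstream host are placeholder assumptions; bare Docker Hub pulls like nginx:latest arrive as /v2/library/nginx/..., so the rewrite splices the Harbor proxy-cache project into the path.

```nginx
# illustrative fragment; "dockerhub" is an assumed Harbor
# proxy-cache project name, harbor.example.com a placeholder
location ~ ^/v2/ {
    # prepend the project unless the path already has it
    rewrite ^/v2/(?!dockerhub/)(.*)$ /v2/dockerhub/$1 break;
    proxy_pass https://harbor.example.com;
}
```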


I don't get it, what does this do?


On most projects you end up wiring up a bunch of custom logic to handle your config, which is often injected as environment variables - think loading from a secure source, validation logic, type safety, pre-commit git scanning, etc.

It's annoying to do it right, so people often take shortcuts: skip adding validation, send files over Slack, don't add docs, etc.

The common pattern of using a .env.example file leads to constant syncing problems, and we often have many sources of truth for our config (.env.example, hand-written types, validation code, comments scattered throughout the codebase).

This tool lets you express additional schema info about the config your application needs via decorators in a .env file, and optionally set values, either directly if they are not sensitive, or via calls to an external service. This shouldn't be something we need to recreate when scaffolding out every new project. There should be a single source of truth - and it should work with any framework/language.
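To make the decorator idea concrete, a hypothetical .env schema sketch. The decorator names and the vault reference below are illustrative assumptions, not necessarily the tool's exact syntax.

```ini
# @required @type=url
API_BASE_URL=https://api.example.com

# sensitive: no value committed; fetched from an external
# service at load time (path is a made-up example)
# @sensitive @fromVault=secrets/prod/stripe
STRIPE_SECRET_KEY=

# @type=int @default=30
REQUEST_TIMEOUT_SECONDS=30
```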


Are there any notable properties of this implementation? Are some parts slower or faster, etc.?


get work done, look younger and slice off the first 5 years of your experience because it is "not relevant". I look about 10-12 years younger, so I am able to slip under the radar, but it makes me wonder how my peers who visibly look their age fare. The job market is London; sector: hedge funds, asset management tech, etc.


Your application and resume will go through multiple systems and be viewed and assessed by 3 to 5 people before anybody even sees your (presumably younger) face.

In the resume, you can remove your experience prior to a certain date (which is recommended anyway, as it makes your resume shorter and very few employers care what you did 15 years ago), but you do have to list your education, and most ATS systems require graduation dates.

Even if you do manage to omit the dates, if you were a senior developer or architect in the very first position from 12 years ago on your resume, the recruiter/hiring manager will be able to put 2 and 2 together.

All the advice I've heard from career/job-search coaches and advisors was to not try to hide your age but to use it to your advantage. Trying to hide your age doesn't just make you look silly, it might create an impression that you're hiding something more serious.

But yeah, looking younger and healthier definitely doesn't hurt, not just in job search but in life in general.


Do you list the year you graduated on your CV?


I do, but if they're being ageist, it means less because I didn't finish my degree until I was 32 because I wasted the first 9 years of my adult life before starting college.


Go is great for concurrency but not quite there for agent support. The problem isn't performance or message passing; it's the agent middleware, i.e. logging, tracing, retries, configuration.

You need a DSL, either supported in the language or through configuration. These are features you get for free in Python and, secondarily, JavaScript. In Go you have to write most of this yourself.


Some writes might fail, you may need to retry, the data store may be temporarily unavailable, etc.

There may be many things that go wrong and how you handle this depends on your data guarantees and consistency requirements.

If you're not queuing, what are you doing when a write fails, throwing away the data?


As bushbaba points out, the same things may happen with Kafka.

The standard regex joke really works for all values of X. Some people, when confronted with a problem, think "I know - I'll use a queue!" Now they have two problems.

Adding a queue to the system does not make it faster or more reliable. It makes it more asynchronous (because of the queue), slower (because the computer has to do more stuff) and less reliable (because there are more "moving parts"). It's possible that a queue is a required component of a design which is more reliable and faster, but this can only be known about the specific design, not the general case.

I'd start by asking why your data store is randomly failing to write data. I've never encountered Postgres randomly failing to write data. There are certainly conditions that can cause Postgres to fail to write data, but they aren't random and most of them are bad enough to require an on-call engineer to come and fix the system anyway - if the problem could be resolved automatically, it would have been.

If you want to be resilient to events like the database disk being full (maybe it's a separate analytics database that's less important than the main transactional database), then adding a queue (on a separate disk) can make sense, so you can continue having accurate analytics after upgrading the analytics database storage. In this case you're using the queue to create a fault isolation boundary. It just kicks the can down the road, though, since if the analytics queue storage fills up, you still have to either drop the analytics or fail the client's request. You have the same problem, but now for the queue. Again, it could be a reasonable design, but not by default, and you'd have to evaluate the whole design to see whether it's reasonable.


Some Kafka writes might fail. Hence the Kafka client having a queue with retries.


I'd probably check this out in my home lab, but as a corporate user, the offering of Discord as a support channel makes me nervous.

Discord is predominantly blocked on corporate networks. Artifactory (& Nexus) are very common in corporate environments. Corporate proxies are even more common. This is why I'd hesitate: these are common (albeit corporate) use cases that may not be readily covered in the docs.

