paffdragon's comments | Hacker News

On one machine I have offlineimap.py[1] (with mutt), on the other laptop Evolution[2] that archives my mail locally that I can also export and back up regularly.

[1]: https://www.offlineimap.org/

[2]: https://gitlab.gnome.org/GNOME/evolution/-/wikis/home


Ok, interesting. I checked the docs of offlineimap (thanks, I hadn't found that one when googling!) and it looks like I could use the `maxage` option for incremental backups (I think I want to create a folder for each week, back it up, and delete it after a while).
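For reference, a minimal ~/.offlineimaprc sketch using `maxage` (value is in days); the account name, host, and paths here are placeholders, not from the docs:

```ini
[general]
accounts = main

[Account main]
localrepository = local
remoterepository = remote

[Repository local]
type = Maildir
localfolders = ~/mail

[Repository remote]
type = IMAP
remotehost = imap.example.com
remoteuser = me
# only consider messages from the last 7 days on each sync
maxage = 7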

Do you have anything set up with evolution to handle things automatically?


Sorry, the laptop where I had Evolution got a new OS installed and I haven't configured it yet, I just started using mutt instead. But my original setup had a local archive folder with sub-folders per year (I think I even had a higher-level grouping of 5 or 10 years). Unfortunately, I don't have the setup right now to check, but I think it was semi-manual: Evolution was archiving to a local folder automatically, then every new year I just moved the mails from the previous year into a folder.

Re offlineimap, just looked into isync/mbsync suggested here by others, it seems better from the description, I'm probably going to try it when I have some time: https://people.kernel.org/mcgrof/replacing-offlineimap-with-...


I also have an old Gmail account that I don't use directly anymore. Instead of POP, I set it up to forward everything to my mailbox.org account. It has worked for me this way for several years now. The only issue is that spam doesn't get forwarded, so I can't see if there are false positives. You can still see it in Gmail if you occasionally log in. For me, since this is a rarely used account, the spam I get is usually indeed spam, so I don't miss it. It might be different for a more active account, though.


Cool thanks, I might set that up as well. And did you just set it up as an alias so you can send email from it as well?

(I don't actually know how often I send email from that address, so maybe I don't even need that, but just in case.)


For sending in the Mailbox.org webmail there is a thing called alternative senders, where I can add email addresses to send from. I have something similar in my Android K9 app, under Manage identities, where I can also add the allowed senders.

I set it up a long time ago this way and I'm unsure if I had to somehow configure Gmail to be able to send from my other account. I have the same setup also for @live.com and @zoho.com, as I was trying different providers. This allows me to easily reply to forwarded emails with the old email addresses.

Edit: I think you're right about the alias, I didn't know that's what it's called in Google: https://support.google.com/mail/answer/22370?hl=en#zippy=%2C...


Wouldn't we still need a "Line Separator" character so we can include line breaks in fields and records for formatting purposes?


RS separates logical "lines" (or there is also the "Group Separator"), but for internal line breaks in a multi-line text field the CR and LF characters could be used, because they have "standard" meanings for text:

CR: move the next character position to position 1 horizontally.

LF: move the next character position vertically below the current character position, potentially inserting SP characters.

So a combination of CR/LF, or LF/CR would have the effect of "New Line".
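The scheme above can be sketched quickly. US/RS/GS are the actual ASCII code points; the sample records are made up, and this only works as long as fields don't themselves contain the separator characters:

```python
# ASCII delimiter characters (C0 control codes)
US = "\x1f"  # Unit Separator: between fields
RS = "\x1e"  # Record Separator: between records
GS = "\x1d"  # Group Separator: between groups of records

# A field may freely contain CR/LF for internal formatting,
# since the structural delimiters are never CR or LF.
records = [
    ["alice", "likes\r\nmulti-line\r\nnotes"],
    ["bob", "one line"],
]

encoded = RS.join(US.join(fields) for fields in records)

# Parsing needs no quoting or escaping rules, unlike CSV.
decoded = [rec.split(US) for rec in encoded.split(RS)]
assert decoded == records
```

This is the usual argument for the ASCII separators: embedded newlines stop being a parsing problem because the record structure never relies on CR/LF.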


Do you have any examples of what tends to break? We used pyenv/rbenv/sdkman etc. individually, then moved to asdf, and now arrived at mise. We're not using it for CI yet, just developer stuff, and so far haven't had issues. But this is quite recent for us, so we haven't had to deal with upgrade issues yet.


We manage mise itself via homebrew. Sometimes when upgrading mise itself, it doesn’t seem to handle being upgraded gracefully, and loses track of installed runtimes even if we manually kick it in our upgrade scripts. Restarting the shell entirely seems to be the only way to fix it.

That, and with Ruby, Node, and at least one other language/tool IIRC, when support for those things moved internal, we had to make a bunch of changes to our scripts to handle that change with effectively no warning. That involved checking to see if the third-party plug-in was installed, uninstalling it if so, and then installing the language based on the built-in support. In the meantime, the error messages encountered were not super helpful in understanding what was going on.
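A rough sketch of that plugin-to-built-in check; the plugin name ("ruby") and the exact mise subcommands are from my memory of its CLI, so treat them as assumptions, and this version only prints what it would do:

```shell
# Dry run: detect whether the old third-party ruby plugin is still
# installed and print the migration that would be needed.
if command -v mise >/dev/null 2>&1 && mise plugins ls 2>/dev/null | grep -qx ruby; then
  echo "would run: mise plugins uninstall ruby && mise install ruby"
else
  echo "nothing to migrate: no mise, or ruby already on built-in support"
fi
```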

I’m hopeful that these types of issues are behind us now that most of the things we care about are internal, but still, it’s been pretty annoying.


Isn't it too dystopian to have cars follow you around and report you to authorities? I can easily imagine some bad scenarios.


Yes, it could potentially be very dystopian for human drivers. That doesn't mean it won't happen. Police departments could make a lot of extra money from the additional traffic tickets; there is a financial incentive for them to do this.


Making money from tickets rewards the wrong behavior: trying to find excuses to ticket you for anything to get extra revenue. This often leads to cops looking for cheap, easy ways to collect that cash wherever they can, instead of doing more important work where their chance of writing a ticket is lower, even if it matters more for safety.


some providers accept crypto or even cash in an envelope


When I read the title I remembered how people where I lived built their own lawn mowers in the 90s. It was a new thing. My father welded the frame from scrap metal, with a motor from a washing machine and some tiny wheels from an old baby stroller lol. It was kind of open source, many people copied it or he helped them build one. Haha, served us surprisingly well for a time :)


My uncle used a semi-DIY lawn mower for many years where he had replaced the original broken engine with an old electric drill. Worked fine enough.


I use it in docker on a NAS - VictoriaMetrics, VictoriaLogs, Grafana - low resource usage, fast, so far zero issues.
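For anyone curious, a minimal docker-compose sketch of that stack; the image names, ports, and data paths are the defaults as I remember them, so double-check against the upstream docs:

```yaml
services:
  victoriametrics:
    image: victoriametrics/victoria-metrics:latest
    ports: ["8428:8428"]
    volumes: ["vm-data:/victoria-metrics-data"]
  victorialogs:
    image: victoriametrics/victoria-logs:latest
    ports: ["9428:9428"]
    volumes: ["vl-data:/victoria-logs-data"]
  grafana:
    image: grafana/grafana:latest
    ports: ["3000:3000"]
    volumes: ["grafana-data:/var/lib/grafana"]
volumes:
  vm-data:
  vl-data:
  grafana-data:
```

In Grafana you then add VictoriaMetrics as a Prometheus-compatible data source and point it at port 8428.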


This isn't a good quiz. An example question (and there are many similar ones):

> When refactoring code, I prioritize:

> - Reducing complexity and coupling

> - Improving readability and maintainability

> - Optimizing performance and resource usage

> - Extracting reusable abstractions

Each refactoring has some goal, some driver behind it. It could be slow performance, an unmaintainable mess, high coupling, too much duplication, etc. Choosing a single answer makes no sense from a programming point of view. And this is the case for most questions I have seen so far on the site.

EDIT: After finishing and seeing the result, I think I understand a little better why it was structured like this. If you are open to doing things differently, your answers probably won't weigh in any one direction in aggregate. But if you have certain biases, you might lean towards choosing similar answers, which shows up in the end.


I finished it anyway:

Your Programming Philosophy

You value clarity and directness in code. You prefer explicit, step-by-step solutions that are easy to understand and debug, even if they require more lines of code.

Abstract ↔ Concrete: 0 (Neutral)

Human ↔ Computer Friendly: +6 (Human-Friendly)

The compass is almost in the middle, just a little up from center towards human-friendly. That's fine, since most code you write is for other humans to read; the compiler does the writing for the machine, and only in perf-sensitive critical paths do you write computer-first... The rest was mostly neutral, because of what I wrote in the parent: it depends on the situation and can go either way depending on the project.


That moves your API layer into a client library that you need to distribute and build for your customers in the programming languages they support. There are some cases where a thick client makes sense, but it's usually easier to do it server side and let customers consume the API from their env; it is easier to patch the server than to ship library updates to all users.


I think most of the discussion in this thread assumes that “customers” of the interface are other groups in the same organization using the database for a shared overarching business/goal, not external end user customers.

For external end users, absolutely provide an API, no argument here. The internal service interactions behind that API are a less simple answer, though.


It's definitely worse for external customers, of course. But it's still not that easy even for internal customers. The main problem is that usually the tables exposed are not meant to be public interfaces, so the other team takes a dependency on the first team's internal schema. And that other team could have completely different goals and priorities, speed and scale, management, and end users with different requirements. At some point the other team might start asking the first team to add some innocent-looking fields to their internal table. The first team might also need to make changes to support their own service that are not compatible with the other team's use. And the other team runs queries that are not under the control of the team owning the DB, which could impact performance. If possible, it is better to agree on an API and avoid depending on internal implementations directly, even for internal customers. There are always some exceptions; e.g., very close teams or subteams under the same management with the same customers could be fine. Or the table in question might have been explicitly designed as a public interface; that is rare, but possible.

