Hacker News | kayson's comments

Or use shellcheck: https://www.shellcheck.net/


Tl;dr: Use both, because they aren't mutually exclusive.

Shellcheck isn't a complete solution, and running in -e mode is essential for smaller bash scripts. Shellcheck even knows whether a script is running in -e mode or not.
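For what it's worth, a minimal sketch of what that looks like in practice (the file contents are just illustrative):

```shell
# Minimal sketch of the "-e mode" in question: with `set -e` (plus -u and
# pipefail), the script aborts on the first failing command instead of
# marching on with bad state. shellcheck lints the same file separately;
# the two are complementary, not mutually exclusive.
set -euo pipefail

tmp="$(mktemp)"                  # the script dies here if mktemp fails
trap 'rm -f "$tmp"' EXIT         # cleanup runs even on early exit
printf 'hello\n' > "$tmp"
matches=$(grep -c hello "$tmp")  # a grep failure would abort the script
echo "$matches"
```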


The Thunderbird Pro Add-on repo [1] doesn't really make it clear: if I want to self-host Appointment and Send, do I need to build the add-on myself and change the endpoints? Or is there some kind of config?

1. https://github.com/thunderbird/tbpro-add-on


That's me. But now everything is done automagically by nzbget and I use nanazip on my Windows desktop.


After some research, it seems much easier to just back up the Proxmox config (and VM disk images, if they're needed) than to define or deploy Proxmox VMs with OpenTofu or Ansible.

https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pm...
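The config-backup half of that can be as simple as tarring the pmxcfs mount. A hedged sketch (the `/etc/pve` path is the standard Proxmox one; the function wrapper and its parameters are my own convenience, not Proxmox tooling):

```shell
# Hedged sketch, not Proxmox's own backup mechanism: pmxcfs exposes the
# cluster configuration (VM/LXC definitions etc.) as plain files under
# /etc/pve, so tarring that tree from a node captures it. Wrapped in a
# function so the source path can be overridden.
backup_pve_config() {
    src="${1:-/etc/pve}"                       # pmxcfs mount point
    out="${2:-pve-config-$(date +%F).tar.gz}"  # dated archive name
    tar -czf "$out" -C "$(dirname "$src")" "$(basename "$src")"
}
```

VM disk images live elsewhere (local-lvm, ZFS, Ceph, ...) and need their own storage-level backup if you want them.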


> According to Cadence’s admissions and court documents, employees of Cadence China did not disclose to and/or concealed from other Cadence personnel, including Cadence’s export compliance personnel, that exports to CSCC were in fact intended for delivery to NUDT and/or the PRC military. For example, in May 2015, a few months after NUDT was added to the Entity List, Cadence’s then-head of sales in China emailed colleagues, cautioning them to refer to their customer as CSCC in English and NUDT only in Chinese characters, writing that “the subject [was] too sensitive.”

Interesting. Sounds like Cadence China employees went rogue. Nonetheless, Cadence USA is on the hook.


Cadence China, a wholly controlled subsidiary of Cadence Design Systems, went rogue, and Cadence Design Systems is on the hook.


> EDA tools constantly need to "phone back home" to load updates and validate licenses

This isn't true in my experience. Cadence, Synopsys, and Siemens tools all use local license files or license servers (mainly FlexLM). Updates are just downloaded from their website.
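For context, the FlexLM-style setup is just an environment variable pointing at a `port@host` license server or a local license file, with no phoning home involved. A sketch with made-up host and path values:

```shell
# Illustration of the FlexLM convention: tools locate the license via
# LM_LICENSE_FILE, either a port@host license server or a local file,
# colon-separated on Unix. The host and path here are hypothetical.
export LM_LICENSE_FILE="5280@licserver.example.com:$HOME/licenses/cadence.lic"
echo "$LM_LICENSE_FILE"
```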


What do they need so much capital for?


My guess is scaling up their ability to manufacture hardware.


I think the downvoting on you is a little harsh. TFA does allude to it, but doesn't explicitly answer your question. I presume the implicit answer is here:

> With growing customer enthusiasm, we were increasingly getting questions about what it would look like to buy a large number of Oxide racks. Could we manufacture them? Could we support them? Could we make them easy to operate together?

i.e. they need the capital in order to be able to satisfy large orders on sane timeframes - but that's very expensive when you're a hardware business.


Thanks. It was a genuine question but I guess I can see how it might be taken otherwise.


They are a hardware company. Hardware costs a lot of money to develop and build.


The thing that always gets me about backup consistency is that it's nearly impossible to ensure application data is in a consistent state without bringing everything down. You can create a disk snapshot, but there's no guarantee that some service isn't mid-write or mid-procedure at the moment of the snapshot. So if you restore from that snapshot, you may encounter some kind of corruption.

Database dumps help with this, to a large extent, especially if the application itself is making the dumps at an appropriate time. But often you have to make the dump outside the application, meaning you could hit it in the middle of a sequence of queries.

Curious if anyone has useful tips for dealing with this.


I think generally speaking, databases are resilient to this, so taking a snapshot of the disk at any point is sufficient as a backup. The only danger is if you're using some sort of on-controller disk cache with no battery backup: then you're effectively lying to the database about what has been flushed, and there can be inconsistencies on "power failure" (i.e., a live snapshot).

But for the most part, and especially in the cloud, this shouldn't be an issue.


Beware that although databases are resilient to snapshotting, they're not resilient to inconsistent snapshots. All files have to be snapshotted at the exact same moment, which means either a filesystem-level or disk-level snapshot, or SIGSTOP all database processes before doing your recursive copy or rsync.

Some databases have the ability to stop writing and hold all changes in memory (or only append to WAL, which is recursive-copy-safe) while you tell it you're doing a backup.
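To make the SIGSTOP variant concrete, here's a sketch with a background `sleep` standing in for the database processes (a stand-in for illustration only, obviously not a real database):

```shell
# Sketch of the SIGSTOP/SIGCONT pattern: freeze the process so no writes
# land mid-copy, copy, then thaw. A background `sleep` plays the database.
sleep 30 &
db_pid=$!

kill -STOP "$db_pid"                           # freeze: no more writes land
state=$(ps -o stat= -p "$db_pid" | tr -d ' ')  # 'T...' means stopped
# ... recursive copy / rsync of the data directory would go here ...
kill -CONT "$db_pid"                           # thaw: writes resume
kill "$db_pid"                                 # clean up the stand-in
wait "$db_pid" 2>/dev/null
echo "$state"
```

Filesystem- or disk-level snapshots (LVM, ZFS, cloud volume snapshots) achieve the same point-in-time guarantee without pausing the process.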


It's not clear if there are other places that application state is being stored, outside your database, that you need to capture. Do you mean things like caches? (I'd hope not.)

pg_dump / mysqldump both solve the problem of snapshotting your live database safely, but can introduce some bloat / overhead you may have to deal with somehow. All pretty well documented and understood though.

For larger postgresql databases I've sometimes adopted the other common pattern of a read-only replica dedicated for backups: you pause replication, run the dump against that backup instance (where you're less concerned about how long that takes, and what cruft it leaves behind that'll need subsequent vacuuming) and then bring replication back.
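A sketch of that pause/dump/resume dance, using PostgreSQL's replay-control functions (the connection string and file names are made up; `pg_wal_replay_pause()`/`pg_wal_replay_resume()` are the function names since PostgreSQL 10):

```shell
# Hedged sketch, not a definitive recipe: pause WAL replay on a read-only
# standby, dump from it (the dump sees a frozen, consistent view), then
# let it catch back up. $REPLICA is a hypothetical connection string.
REPLICA="${REPLICA:-host=replica.example dbname=mydb}"

backup_from_replica() {
    psql "$REPLICA" -c "SELECT pg_wal_replay_pause();"   # stop applying WAL
    pg_dump "$REPLICA" -Fc -f "$1"                       # custom-format dump
    psql "$REPLICA" -c "SELECT pg_wal_replay_resume();"  # catch back up
}
```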


I wonder when this will make it into pfSense... The transition to Kea has been a bit of a mess, with tons of bugs. Thankfully it's controlled by an option, and it seems like 2.8.0 knocked out quite a few of them.


I have been using Kea on pfSense CE for a long time — I think it was version 23.0.x. Or you mean 3.0 in particular? I also have OPNsense and I am not completely convinced of their aggressive update strategy yet. For a firewall, I prefer stability over features. Jumping to the newest releases every month can have tradeoffs.

Note: in general, both OPNsense and pfSense are excellent. I have never had any problems with either one.


I use pfSense CE, and rely on DNS entries to be automatically created for DHCP addresses. That worked fine for more than a decade, until they made Kea the default a couple of years ago (or did they just put a bunch of notices in the interface that old DHCPd was deprecated? It's been long enough that I don't remember).

Anyway, at the time Kea (at least in pfSense) wasn't able to do that, which caused things to break for me for a bit. It's a small thing (and, I mean, totally fair with free software) but the fact that they pushed an update to Kea before Kea (again, at least in pfSense) was at feature parity rubbed me the wrong way and has kept me from using it since then.

(edit: on the off chance anyone cares, I decided to check and it looks like this issue has been fixed as of pfSense CE 2.8.)


Is OPNsense ahead on this, then? Or the same?


I don't follow pfSense too much, but my understanding is OPNsense typically brings in package updates faster, as they have a more frequent update cycle. I can't speak too much to bugs, as I haven't migrated to Kea, but IMO some core functionality wasn't there until recently. And Dnsmasq seems like a better fit for me anyway, which is where I'll migrate to.

From the 25.1.6 OPNsense May update notes:

> Last but not least: Kea DHCPv6 is here. And with it full DHCP and router advertisement support in Dnsmasq to bridge the gap for ISC users who do not need or want Kea. We are going to make Dnsmasq DHCP the default in new installations starting with 25.7, too. ISC DHCP will still be around as a core component in 25.7 but likely moves to plugins for 26.1 next year.

https://docs.opnsense.org/releases/CE_25.1.html#may-08-2025


I've been using it on OPNsense since the first version it was released in. I aggressively switched because I wanted to ditch my weird setup for multiple subnets (forwarding through an L3 switch). Haven't had any issues.


I've tried a few times to switch to Kea in pfSense, but it crashes my network fiercely.


By that logic, so is using 16 bits and a 44.1 kHz sampling rate.


16-bit, 44.1 kHz audio almost perfectly covers the range of human hearing. It wasn't a coincidence that the creators of the CD chose it. Anything above that is studio-grade stuff to give extra headroom for editing (applying filters during studio editing can amplify noise, which is unwanted; for just playing back audio there are no advantages).
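The arithmetic behind that claim, for anyone who wants it spelled out:

```shell
# Back-of-envelope check: by Nyquist, a 44.1 kHz sample rate captures all
# frequencies below 22.05 kHz, comfortably above the ~20 kHz ceiling of
# human hearing; 16 bits gives roughly 16 * 6 = 96 dB of dynamic range
# (~6.02 dB per bit, rounded here to keep the arithmetic integer).
sample_rate=44100
nyquist=$((sample_rate / 2))    # highest representable frequency, Hz
dynamic_range_db=$((16 * 6))    # approximate SNR of 16-bit quantization
echo "$nyquist $dynamic_range_db"
```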

With standard Bluetooth codecs you get nowhere close to that, and there is a significant, noticeable delay for video content. A headphone jack is easy to make IP68: all rugged phones have one, and all non-rugged ones have a USB port, which is bigger and more irregular than a frigging circle.

