Hacker News | past | comments | ask | show | jobs | submit | m1245's comments

Technically, it’s not much different from using Ansible to run Docker on remote hosts.

What it provides is a set of conventions based on what most web apps look like.

E.g. a built-in proxy with automatic TLS and zero-downtime deployments, first-class support for a DB and cache, encrypted secrets, etc.

It’s definitely not for every use case, but for your typical 3-tier monolith on a handful of servers I found it does the job well.
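For context, the day-to-day workflow behind those conventions is roughly two commands. This is a sketch based on Kamal's CLI; the hosts, image name, and accessories would be declared in your config file:

```shell
# One-time: install Docker on the hosts and boot accessories (proxy, db, cache)
kamal setup

# Every release: build the image, push it, and roll it out with zero downtime
kamal deploy
```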


Hey, I'm the guy behind GetDeploying. To be clear:

This is a side project and the vast majority of companies were added by me without being paid for it. I now charge to get listed or add a banner because the site takes too much time for me to maintain.

And totally agree, AMD GPUs are not covered enough. Happy to list your company at no cost to help me fix that. Feel free to email me if interested.


Hey, thanks for the response. I'm currently not interested in being listed on a paid site like this, even if the offer is free. I don't think that is what the community needs.

What good is any of this other than driving clicks for your benefit? If I'm going to get any traffic from your site, it is all going to be driven by people just searching or quoting comparisons, not actual sales.

For example, right now, you list another MI300x provider. Right at the top of the page you parrot their bogus claims about 20k GPUs by 2024. They don't have pricing, it is just "contact us". "Based on our records, XXX has at least 2 data center locations around the world"... yet it lists both of them in the US, not "around the world". I could go on and on, but what I know is that I don't want to be associated with something like this.

Sorry for the truth bomb, but if it is taking too much time for you to maintain, you should shut it down or find someone else willing to maintain it properly. Having incomplete and bogus data isn't helpful for anyone.


Thanks for the feedback. There's certainly room for improvement.


I saw your original reply, good editing.


Hey I'm the author of the post.

We ran Panelbear for about a year before merging it into Cronitor's codebase.

Cronitor is also a Django monolith, but on EC2, and it had already been humming along nicely for several years through frequent traffic spikes.

We had no reliability issues with K8s, but it came down to this: as a small team, we decided to have fewer moving parts.

The new setup is not too different from what I describe here: https://anthonynsimon.com/blog/kamal-deploy/


Thanks, that makes sense. I'll update this so it's clearer.


Awesome, thanks and sorry for the pedantry.


Just a few days ago I learned that iPhones come with a built-in image scanner in the notes app.

Interestingly, it was a senior citizen in a local shop who pointed that out to me when I needed to scan some documents for him.


Definitely agree.

I think the problem is that it’s usually easier to arrive at a complex solution than a simple one.


Always work with the following in mind:

Il semble que la perfection soit atteinte non quand il n'y a plus rien à ajouter, mais quand il n'y a plus rien à retrancher.

It seems that perfection is attained not when there is nothing more to add, but when there is nothing more to remove.


Saint-Exupéry?


Indeed.


Wind, Sand and Stars just may be my 'Desert Island Book'. Probably not as well known in the English speaking world as it deserves to be.


Healthchecks is a great service!

Not sure if you tried it too but https://cronitor.io/ supports more complex alerting rules like the one you describe.

As a bonus, you can also create uptime checks and status pages under the same roof.

Full-disclosure: I work at Cronitor, happy to help if you have any questions :)
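As a sketch of how cron monitoring services like this typically work: you wrap the job so it pings a per-monitor URL on success, and the service alerts you if the ping doesn't arrive on schedule. The URL and key below are placeholders, not a real endpoint:

```shell
# Hypothetical crontab entry: ping the monitor only if the backup succeeds
0 3 * * * /usr/local/bin/backup.sh && curl -fsS https://example.com/ping/<MONITOR_KEY>
```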


Hey! I'm the author of the post, glad you enjoyed it!

Yes, you would just attach it to the network switch and have it join the cluster.

The control plane will "discover" the additional capacity and redistribute workloads if necessary.

With k3s, a new node can join the cluster with a single command (assuming you have the API token).

Something like:

  $ curl -sfL https://get.k3s.io | K3S_URL=https://$YOUR_SERVER_NODE_IP:6443 K3S_TOKEN=$YOUR_CLUSTER_TOKEN sh -


One disadvantage of k3s is that it does not have an HA control plane out of the box (specifically, users are expected to bring their own HA database solution[1]). Without that, losing the single-point-of-failure control plane node is going to give you a very bad day.

I use kubespray[2] to manage my Raspberry Pi based k8s homelab, and replacing any node, including HA control plane nodes, is as easy as swapping the board and executing an Ansible playbook. The downsides are that it requires users to have more knowledge about operating k8s, and a single Ansible playbook run takes 30-40 minutes...

1. https://rancher.com/docs/k3s/latest/en/installation/ha/

2. https://github.com/kubernetes-sigs/kubespray


> One disadvantage of k3s is that it does not have an HA control plane out of the box

This hasn't been true for a while, since these days K3s ships with etcd embedded: https://rancher.com/docs/k3s/latest/en/installation/ha-embed...
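For reference, bringing up an HA control plane with the embedded etcd looks roughly like this (a sketch based on the k3s docs; the IP and token are placeholders):

```shell
# First server: initialize the embedded etcd cluster
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Additional servers: join the etcd cluster (needs the first server's token)
curl -sfL https://get.k3s.io | K3S_TOKEN=$YOUR_CLUSTER_TOKEN \
  sh -s - server --server https://$FIRST_SERVER_IP:6443
```

An odd number of server nodes (3, 5, ...) is needed so etcd can maintain quorum.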


Thanks for the info. I haven't been following k3s development after I decided to switch to kubespray. Glad that this concern has been addressed. Nice work!


Denis, first of all: really cool product!

It’s also fair to ask for registration in order to prevent abuse / record consent for terms and privacy.

However, it would be best if you asked that before uploading the photo. Many people seem frustrated that their data was processed and now they must create an account to see the result/delete the data.

Once again, it looks great, it’s just an expectation/transparency issue :)

Edit: If it’s ok for you to disclose this, what kind of infra are you running on? Just curious.


We will change the way registration works on Portrait HD today, thank you again for your feedback :) It will ask users to register before the upload itself.

We are on Amazon infrastructure right now with 15+ servers. It takes 5-20 minutes only because there are about 600 tasks in the processing queue and it's quite long; we just launched it today.

Thanks again for warm feedback :)


I'm also interested in this, and someone recently shared this with me: https://stripe.com/newsroom/news/taxjar

Apparently Stripe's acquisition of TaxJar is meant to expand their offerings in sales tax compliance.

From the article:

As the latest addition to Stripe’s revenue platform, TaxJar will help businesses automate tasks such as:

- Providing accurate sales tax rates at checkout, tied to the exact street address of the customer.

- Automatically submitting tax returns to local jurisdictions and remitting the sales tax collected.

- Producing local jurisdiction reports to show sales and sales tax collected—not only for each state, but for relevant counties, cities, and other special jurisdictions.

- Evaluating a company’s products and intelligently suggesting the right product tax code.


We have a number of exciting tax-related launches with Stripe coming soon. Drop me a note at jackerman@stripe.com and happy to give you early access.


"exciting" tax-related ...

You keep using that word, I do not think it means what you think it means. ;)


This sounds great! We switched to Paddle (https://paddle.com/) so we didn't have to do tax compliance ourselves. Might switch back...


Drop me a note at jackerman@stripe.com and happy to give you a sneak peek at some cool tax-related features we're launching very soon.

