wreath's comments | Hacker News

I don't know why this is downvoted. Even though the vast majority of startups fail, the experience of an attempt is much more valuable, in the eyes of future employers, than any class you attended at MIT. It would also equip you with a valuable business perspective that you wouldn't have gotten from a regular tech job. I mean, is this worse than just continuing to apply to jobs? Come on, HN.

> Integration testing is of course useful, but generally one would want to create unit tests for every part of the code, and by definition it's not a unit test if it hits multiple parts of the code simultaneously.

The common pitfall with this style of testing is that you end up testing implementation details, coupling your tests to your code rather than to the interfaces at the boundaries of your code.

I prefer the boundary between unit and integration tests to be the process itself. Meaning, if I have a dependency outside the main process (e.g. a database, an HTTP API, etc.), then it warrants an integration test where I mock this dependency somehow. Otherwise, unit tests test the interfaces with as much coverage of actual code execution as possible. In unit tests, out-of-process dependencies are swapped with a fake implementation, like an in-memory store that covers only the part of the interface I use, instead of the full-fledged one. This results in much more robust tests that I can rely on during refactoring, as opposed to "every method or function is a unit, so unit tests should test these individual methods".
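Roughly what I mean, as a sketch (the UserStore / registerUser names are made up for illustration, not from any real codebase):

    // The code under test depends on an interface, not a concrete database client.
    interface UserStore {
      findByEmail(email: string): Promise<string | null>; // returns a user id
      save(email: string): Promise<string>;               // returns the new user id
    }

    // Hypothetical business logic under test.
    async function registerUser(store: UserStore, email: string): Promise<string> {
      const existing = await store.findByEmail(email);
      return existing ?? (await store.save(email));
    }

    // Fake for unit tests: in-memory, covering only the part of the interface I use.
    class InMemoryUserStore implements UserStore {
      private users = new Map<string, string>(); // email -> id
      async findByEmail(email: string) { return this.users.get(email) ?? null; }
      async save(email: string) {
        const id = `u${this.users.size + 1}`;
        this.users.set(email, id);
        return id;
      }
    }

    // The unit test exercises the real logic against the fake, with no mocking of
    // internals, so refactoring registerUser doesn't break it.
    async function testRegisterIsIdempotent() {
      const store = new InMemoryUserStore();
      const first = await registerUser(store, "a@example.com");
      const second = await registerUser(store, "a@example.com");
      console.assert(first === second, "expected the same user id");
    }

Only the real out-of-process dependency (an actual database, HTTP API, queue, etc.) gets promoted to an integration test.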


Yeah I think that's a question of using the right tool for the job. Some projects are of a size that it's not really necessary to be more fine-grained, but as the number of moving parts increases, so too in my experience does the need to ensure those parts are individually working to spec, and not just the whole thing. A classic example might be something like a calculator that enshrines a complex piece of logic, and a piece of code that uses it. I would test both of those in isolation, and mock out the calculator in the second case so that I could generate a whole range of different return values and errors and prove that the calling code is also robust and behaves correctly. Separating them like this also potentially reduces the number of tests you need to write to ensure that you hit all possible combinations of inputs and outputs.
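A rough sketch of what I mean (hypothetical names, no particular test framework):

    // The complex logic sits behind its own interface...
    interface Calculator {
      price(items: number[]): number; // may throw on invalid input
    }

    // ...and the calling code gets tested against hand-rolled stand-ins, so I can
    // force any return value or error without driving the real calculator into
    // every corner case from the caller's tests.
    function checkout(calc: Calculator, items: number[]): string {
      try {
        return `total: ${calc.price(items).toFixed(2)}`;
      } catch {
        return "error: could not price basket";
      }
    }

    const fixedCalc: Calculator = { price: () => 42 };
    const failingCalc: Calculator = { price: () => { throw new Error("bad input"); } };

    console.assert(checkout(fixedCalc, [1, 2]) === "total: 42.00");
    console.assert(checkout(failingCalc, [1, 2]).startsWith("error:"));

The calculator itself then gets its own tests over the full range of inputs, separately.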


> Take AWS RDS. Under the hood, it's:

    Standard Postgres compiled with some AWS-specific monitoring hooks
    A custom backup system using EBS snapshots
    Automated configuration management via Chef/Puppet/Ansible
    Load balancers and connection pooling (PgBouncer)
    Monitoring integration with CloudWatch
    Automated failover scripting

I didn't know RDS had PgBouncer under the hood. Is this really accurate?

The problem I find with RDS (and most other managed Postgres) is that it limits your options for how you design your database architecture. For instance, if write consistency is important to you and you want synchronous replication, there is no way to do this in RDS without either Aurora or having the readers in another AZ. The other issue is that you only have access to logical replication, since you don't have access to your WAL archive, which makes moving off RDS much more difficult.
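For comparison, on self-managed Postgres this is just a couple of settings on the primary (values below are illustrative, and "replica1" is a made-up standby name); RDS doesn't expose these outside of the Multi-AZ / Aurora options above:

    # postgresql.conf on the primary (illustrative)
    synchronous_standby_names = 'FIRST 1 (replica1)'  # which standby(s) a commit must wait for
    synchronous_commit = remote_apply                 # commit returns only after the standby has applied the WAL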


> I didn't know RDS had PgBouncer under the hood

I don't think it does. AWS offers this as RDS Proxy, but it's an extra service with extra cost (and a bit cumbersome to use, in my opinion; it should have been a checkbox, not an entire separate thing to maintain).

Although it technically has a "load balancer", in the form of a DNS entry that resolves to a random reader replica, if I recall correctly.


I ran into the exact same problem a few weeks ago, with around 1k partitions, though they were small. I ended up adding a cron job to run ANALYZE on the partitioned table (not the partitions!) once a day. I hope this gets fixed in a future version of PG.
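Something along these lines (database and table names are made up; autovacuum doesn't maintain statistics on the partitioned parent itself, so it needs a manual ANALYZE):

    # crontab entry: analyze the partitioned parent once a day at 03:00
    0 3 * * * psql -d mydb -c 'ANALYZE my_partitioned_table;'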


This has been my experience too. At the end of each session, I'm left mentally exhausted, without a full understanding of what I just did, so I have to review it again.

Coding this way requires effort equal to designing, coding, and reviewing combined, except the code I review isn't mine. Strange situation.


I feel the same way. I'm on the job market (though still employed), and I can tell you my core skills have degraded since I started using LLMs a year or so ago. Technical (non-leetcode!) interviews are now more challenging for me, since I've forgotten how to make all these small decisions (e.g. should this be a private class property or a public one).

I decided to just disable Copilot and keep my skills sharp. I know we will be called back to clean up the mess. Reminiscent of offshoring in the 2000s.


In an industry where most people stay for around two years (at least pre-2022), people aren't even around to see the results of their decisions.


Do you have any data comparing the risk of injury in powerlifting vs other sports?


We do Kanban every sprint.


What does the accumulation of muscle strength have to do with stretching?

