Hacker News | rcaught's comments


> These days a great SDK, not a great API, is a hallmark, and maybe even a necessity, of a world class development experience.

IMO, you can't build a great SDK without a great API. Duct tape only goes so far.


I agree with you.

I actually don't see any value-add from SDKs that wrap HTTP requests. HTTP is a standard, and my programming environment already provides a way to make requests. In fact it probably provides multiple, and your SDK might use a different one from what I do in the project, resulting in bloat. And for what gain? I still need to look at docs and try my best to do what the docs are telling me to.

Now if it's a statically typed language then I kinda get it. Better IDE/lsp integration and all. But even then, just publish an OpenAPI spec and let me generate my own client that's more idiomatic with my project.
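To the point that the environment already provides HTTP: a plain stdlib call is often all the "SDK" you need. A minimal Python sketch, where the endpoint and token are made up for illustration:

```python
import json
import urllib.request


def build_request(url: str, token: str) -> urllib.request.Request:
    """Build an authenticated JSON GET request using nothing but the stdlib."""
    return urllib.request.Request(url, headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
    })


def get_json(url: str, token: str) -> dict:
    """Fetch a URL and decode its JSON body."""
    with urllib.request.urlopen(build_request(url, token)) as resp:
        return json.load(resp)


# Usage (hypothetical endpoint):
#   user = get_json("https://api.example.com/v1/users/42", "my-token")
```

No extra dependency, no second HTTP stack shipped alongside the one the project already uses.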


This is one of the sentiments that powers the Common Lisp ecosystem. There are already good data structures and functions in the standard (and quasi-standard) library, so why invent new ones? In other languages (Node.js), you pull in a library and it brings a whole kitchen with it.


I agree with the sentiment that great APIs are a prerequisite to great SDKs, but great SDKs are really about time saving. Consider AWS's API, which requires a specific signing mechanism. That is annoying to implement manually. In general, the common method of shared-secret passed via bearer token is pretty insecure. I hope to see that change over time, and SDKs can help facilitate that.
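For a sense of what that signing involves, here is a simplified sketch of SigV4's key derivation and final signature step in Python. It is not a complete signer; the real process also requires building a canonical request, hashing the payload, and assembling a credential-scoped string to sign per AWS's documentation:

```python
import hashlib
import hmac


def _hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


def derive_signing_key(secret: str, date: str, region: str, service: str) -> bytes:
    """SigV4 key derivation: a chain of HMACs scoping the secret to a
    date (YYYYMMDD), region, and service."""
    k_date = _hmac_sha256(("AWS4" + secret).encode("utf-8"), date)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")


def sign(secret: str, string_to_sign: str, date: str, region: str, service: str) -> str:
    """Produce the hex signature for an already-built string-to-sign."""
    key = derive_signing_key(secret, date, region, service)
    return hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).hexdigest()
```

The point of the chained derivation is that a leaked signature (unlike a leaked bearer token) is scoped to one request, one day, one region, and one service.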


I like the middle ground where I generate the models from OpenAPI but stick to my preferred HTTP library of choice in the language I’m using.
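That middle ground can look like the following: a typed model of the kind a generator (e.g. openapi-generator or datamodel-code-generator) might emit from a schema, kept decoupled from any particular HTTP client. The `User` shape here is hypothetical:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class User:
    """Model mirroring a hypothetical OpenAPI `User` schema."""
    id: int
    name: str
    email: Optional[str] = None

    @classmethod
    def from_dict(cls, d: dict) -> "User":
        # Validation boundary: decode the wire format here, then the rest
        # of the app works with typed objects, not raw dicts.
        return cls(id=d["id"], name=d["name"], email=d.get("email"))


# Pair it with whatever HTTP client the project already uses:
#   resp = my_http_client.get("/v1/users/42")
#   user = User.from_dict(resp.json())
```

The models carry the IDE/type-checker benefits; the transport stays idiomatic to the project.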


But you _can_ put a good SDK in place to abstract away a terrible API.

I've done this at work to ease use for clients -- once they're happy with the SDK interface I can do whatever I want behind the scenes to shore up the API/backend without impacting those same clients and their OK SDK.


As the saying goes 'if you cannot solve it with duct tape, you did not use enough duct tape' ;)


Malicious access to these tokens is malicious access to the service.


Rails 8 will by default use the DB for cache, queues and WebSocket broadcasting - https://fly.io/ruby-dispatch/the-plan-for-rails-8/


Though SQLite will not be used for the WebSocket broadcasting.




Make one program do no one thing well.


hahaha, do you even realize what else this company makes?


It looks like one you shouldn't implement.


May I ask why? I've actually never used a NoSQL database, so I'm curious.


The answer, somewhat counter-intuitive to most people, is that NoSQL is very rigid. It's counter-intuitive because having no required schema up front appears to be more flexible, not less.

However, having your database not enforce a schema means your application must, and there is no way around it. If you ask for a DayEvent and get back something totally different, what do you do?

The rigidity in most NoSQL databases (assuming some form of document store like MongoDB) comes from their inability to combine data in new ways in a performant manner (joins). This is what SQL excels at. That means you need to design your data in exactly the way it is going to be consumed, because you can't easily recombine the pieces in different ways as you iterate on your application. Generally you must know your data access patterns in advance to create a well-behaved NoSQL database. Changes are hard. This is rigid.

Thus, it actually makes more sense to go from SQL to NoSQL as you gain experience and discover the data access patterns. The advantage of NoSQL is not flexibility; that is actually its disadvantage! The advantage is horizontal scalability. However, a decent SQL server with a competently designed schema will go a very long way.


I think you have a very thoughtful take, but I believe it's a mistake to think of 'NoSQL' as a monolithic category.

There's a very wide spectrum, from stores with an evolvable document-oriented data model, strongly consistent secondary indexes, transactions, aggregations, and joins, down to simplistic key/value stores like DynamoDB and Cassandra, which do force you into the very waterfall posture you're spot-on in pointing out.


Because events are related to users and they both are related to timezones and events can be related to each other. MongoDB is really good for storing big blobs of data you want to retrieve quickly, with some basic search and index, but it's awful at relations between data.


Ah I see what you mean. That makes sense!


Permission boundaries


Savings plans, spot, EDP discounts. Some of these have to be applied, right?


At this level they can just go bare metal or colo; use Hetzner's pricing as a reference. Logs don't need the same level of durability as user data, and some level of failure is perfectly fine. I would estimate $100K per month or less, $200K at the maximum.

