Hacker News | new | past | comments | ask | show | jobs | submit | narrowtux's comments

> The typical way to do schema migration is to compose a list of ALTER TABLE statements. This makes it hard to track the latest schema state as migrations accumulate. It's more intuitive for developers to specify the desired state. Ideally, PostgreSQL could allow developers to specify the desired CREATE TABLE schema; the engine would then reconcile it with the latest schema, figure out the diff, and plan the migration path internally.

This is a terrible idea. The behaviour of engines that do this is unpredictable and suddenly you lose data because the engine deleted a column.


If a tool blindly drops columns, that's just a bad tool! It doesn't mean the concept is flawed.

Thousands of companies successfully use declarative schema management. Google and Facebook are two examples at a large scale, but it's equally beneficial at smaller scales too. As long as the workflow has sufficient guardrails, it's safe and it speeds up development time.

Some companies use it to auto-generate migrations (which are then reviewed/edited), while others use a fully declarative flow (no "migrations", but automated guardrails and human review).

I'm the author of Skeema (https://github.com/skeema/skeema) which has provided declarative flow for MySQL and MariaDB since 2016. Hundreds of companies use it, including GitHub, SendGrid, Cash App, Wix, Etsy, and many others you have likely heard of. Safety is the primary consideration throughout all of Skeema's design: https://www.skeema.io/docs/features/safety/

Meanwhile a few declarative solutions that support Postgres include sqldef, Migra, Tusker (which builds on Migra), and Atlas.


This is exactly how auto migrations in things like Django and Prisma work. Yes, you need to check the changes before you apply them, but most of the time they are entirely sensible.


Wouldn't it be more interesting to have this as an external tool?

You input your CREATE TABLE statement and it issues you back the migration statements? Then you can check it against your development database or whatever, and if it looks fine, use it?

This way you could check and modify the migration path without writing the ALTER statements yourself.
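The core of such a tool is a reconcile step: compare the columns a table currently has with the columns the desired CREATE TABLE declares, and emit the ALTER statements that bridge the gap. Here is a minimal, hypothetical sketch of that step in Python; real tools like sqldef, Migra, and Atlas parse actual SQL and handle far more cases (indexes, constraints, renames):

```python
# Hypothetical reconcile step: diff current vs. desired columns for one table.
# current/desired map column name -> SQL type string.

def diff_table(table, current, desired):
    stmts = []
    for col, sqltype in desired.items():
        if col not in current:
            stmts.append(f"ALTER TABLE {table} ADD COLUMN {col} {sqltype};")
        elif current[col] != sqltype:
            stmts.append(f"ALTER TABLE {table} ALTER COLUMN {col} TYPE {sqltype};")
    for col in current:
        if col not in desired:
            # Surface drops commented out for human review, never apply blindly.
            stmts.append(f"-- REVIEW: ALTER TABLE {table} DROP COLUMN {col};")
    return stmts

print(diff_table(
    "users",
    current={"id": "bigint", "name": "text", "legacy_flag": "boolean"},
    desired={"id": "bigint", "name": "varchar(100)", "email": "text"},
))
```

The key safety property is in the last loop: a column disappearing from the desired schema produces a commented-out statement for a human to confirm, which addresses the "the engine deleted a column" objection upthread.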

This is one of the most frustrating things about SQLite for me. Changing a table doesn't always work with an ALTER statement; sometimes you need to drop and recreate it with the new columns. Why can't it do the magic for me? It's really frustrating and was often enough the sole reason I used Postgres for private projects.


There are actually a bunch of external tools that offer a declarative schema management flow. Personally I agree that the overall flow should be external to the DB, but it would be useful if databases offered more built-in functionality in this area to make these tools easier to write.

For sqlite in particular, check out https://david.rothlis.net/declarative-schema-migration-for-s... and https://sqlite-utils.datasette.io/en/stable/python-api.html#...


We do exactly this, but with our own "ground truth" schema in the form of an XML file defining the various tables, views, triggers and so on.

We then have a program which compares the latest schema from XML to a given database, and performs a series of CREATE, ALTER and so on to update the database so it conforms.

Since we've written it ourselves we have full control over what it does and how it does it, for example it never issues DROP on non-empty tables/columns or similar destructive actions.

We've had it for a long time now and it's worked very well for us, allowing for painless autonomous upgrades of our customers' on-prem databases.


I find this interesting! Is there anything similar in the open-source world? I have built projects with Supabase in the past, and one of my gripes about it is that it becomes obnoxious to track what happens to the schema over time in version control with dozens of migration files. For example, you have no idea what the current state of a table or trigger is, because there might be several updates that occurred across four dozen migration files, and you may not even be sure which files those changes are in. You have to just look at the database directly rather than the code.


I haven't stumbled over anything, but I wouldn't be surprised if it exists. It's not magic, my colleague wrote the first version in a few days, and we've iterated on it since.

It just requires that there are some system views or similar that you can use to extract the current database schema, so you have something to compare against.

Our tool goes through the XML file and, for each table, runs a query to find the current columns, and for each column finds the current configuration. It then compares with the columns in the XML file and decides what to do for each: ALTER, DROP, or ignore (because of possible data loss), etc. Datatype changed from "int" to "varchar(50)"? Not a problem, since 50 chars are enough to store the largest possible int, so issue an ALTER TABLE. Column no longer present? Check if the existing column has any data; if not, we can safely DROP the column, otherwise keep it and issue a warning.
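The guardrail logic described above could be sketched roughly like this. All names and the whitelist of "safe" conversions are hypothetical (the real tool is internal); the point is that only provably lossless type changes are applied automatically, and drops only happen on empty columns:

```python
# Sketch of per-column planning: apply only lossless changes automatically.
# Pairs of (current_type, desired_type) known to never lose data, e.g.
# int -> varchar(50), since any int fits in 50 characters.
SAFE_WIDENINGS = {("int", "varchar(50)"), ("int", "bigint")}

def plan_column(table, col, current_type, desired_type, has_data):
    if desired_type is None:  # column no longer present in the XML schema
        if has_data:
            return f"-- WARN: keeping {table}.{col}, column contains data"
        return f"ALTER TABLE {table} DROP COLUMN {col};"
    if current_type == desired_type:
        return None  # nothing to do
    if (current_type, desired_type) in SAFE_WIDENINGS:
        return f"ALTER TABLE {table} ALTER COLUMN {col} TYPE {desired_type};"
    return f"-- WARN: unsafe change {current_type} -> {desired_type} on {table}.{col}"

print(plan_column("orders", "qty", "int", "varchar(50)", has_data=True))
print(plan_column("orders", "old_note", "text", None, has_data=True))
```

Anything outside the whitelist degrades to a warning rather than a destructive statement, which is what makes this kind of tool safe to run unattended against customer databases.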

Views, triggers and stored procs are replaced if different. We minimize logic in the database, so our triggers and stored procs are few and minimal.

Materialized views require a bit of extra handling with the database we use, in that we can't alter but have to drop and recreate. So we need to keep track of this.

As you say it's very nice to use as a developer, as you only have to care about what the database should look like at the end of the day, not how it got there. Especially since almost all of our customers skip some versions (we release monthly).


It’s not inherently a bad idea, though. I do agree it would likely be poorly implemented.


Why would you think so? The PostgreSQL developers haven't exactly stood out by delivering bad software or features so far. If they implemented this, I would actually expect a pretty good solution with even better documentation.


The creation of the backing migration plan would have to be interactive, certainly. But that doesn't mean the concept as a whole is destined for failure.


I agree that this is the ideal solution, but you can't exactly retrofit a place you rent with completely new plumbing.


I've lived in rental apartments where the drain pan has a pipe leading to the existing water dump hole that the washer drains into.


Think about having one plane (the entire glass) pressing two microswitches. Debouncing one switch is hard enough; now you've got to debounce two, and detect whether the user was doing a really fast double click or just pressed in the center of the mouse.
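The ambiguity here can be sketched in a few lines, assuming you already have debounced per-switch press timestamps (all names and the threshold below are made up for illustration): two presses inside a short window get classified as one center press rather than two clicks.

```python
# Hypothetical sketch: classify two debounced switch presses under one plate.
# Timestamps are in milliseconds; None means that switch was not pressed.

CENTER_WINDOW_MS = 30  # presses this close together count as one center press

def classify(left_ms, right_ms):
    if left_ms is not None and right_ms is not None:
        if abs(left_ms - right_ms) <= CENTER_WINDOW_MS:
            return "center"      # one plate pressed both switches at once
        return "both"            # two genuinely distinct presses
    if left_ms is not None:
        return "left"
    if right_ms is not None:
        return "right"
    return "none"

print(classify(100, 110))  # -> center
print(classify(100, 400))  # -> both
```

The hard part in practice is choosing the window: too short and center presses register as left+right, too long and fast distinct clicks get swallowed into a phantom center press.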


Debouncing buttons is a solved problem, after all. Thinking that Apple is so good that they put the charging port on the bottom for a reason, but aren't able to debounce two buttons, is silly.


The problem Apple created for themselves is staying true to their 'one mouse button' philosophy for too long, and too literally.


macOS has both. If you keep dragging after opening the menu, the mouse-up will activate the menu item.


I still want some app that makes the awesome BRouter algorithm usable on iOS. They have an Android app, which is "easy" to do since the routing module is written in Java, but alas, iOS doesn't support Java.

IMO, brouter generates the best routes for bicycling with lots of options to customize for what kind of riding you want to do (road, trekking, gravel, recreational, commuting, safe and quiet vs. quick, etc.).

Right now I use brouter-web on my iPhone, but it's really hard to use since the UI is really small and it's very easy to accidentally place a waypoint. After that, I send the GPX file to my bike computer.


Inspecting the app's content or the network traffic.


A video mixer à la the Blackmagic ATEM Mini will do this. Maybe there are cheaper options if it's just being used for input switching.


That will introduce a lot of delay - they make synchronous cuts by frame buffering every input.


Then I'm afraid it's not possible. The reason the input switch takes so long is the EDID negotiation.

A video mixer acts as the source for the output and the sink for the inputs, whereas an HDMI switch will just physically disconnect and reconnect the output port to a different input port, meaning the EDID negotiation has to be re-done every time.


It still seems the right hardware, upstream or downstream depending on which direction you're viewing things from, could keep the connection alive with the EDID pre-negotiated. Like, assume the hardware doesn't change between switches. If you're the TV's firmware and have the CPU power to, why renegotiate when switching inputs? Solve for the default case and have it standing by and running hot. It's not like the panel is going to change in the interim.


FWIW, I don't care if the screen briefly flickers during switching.


You just need to strip the HDCP on the source.


You can't tell me they are on the same level. I'm infuriated just reading all the issues people have with this Experian service.


There are lots of use cases for apps that don't need these advanced features and just want a nice backdrop for their geo data.

The current offerings of Google, Mapbox, Bing, etc. are prohibitively expensive when you just need a small subset of their features.

The company I work for is a happy customer of maptiler.com, which even offers routing and reverse geocoding. It might not be as great and up to date as Google's, but it's more than enough for our apps.


In this case you should just use Leaflet and OpenStreetMap for a grand total of $0 per month.


OpenStreetMap tile servers aren't really meant to be used for this. They do allow some use, but are also subject to throttling if your application puts too much load on their servers.

> However, OpenStreetMap’s own servers are run entirely on donated resources. They have strictly limited capacity. Heavy use of OSM tiles adversely affects people’s ability to edit the map, and is an abuse of the individual donations and sponsorship which provide hardware and bandwidth. As a result, we require that users of the tiles abide by this tile usage policy.

https://operations.osmfoundation.org/policies/tiles/


Just so you know, I make it a point to report to OSM maintainers any relatively sizeable app using their tile servers, just so they can be banned.

The default OSM tile server is not here to serve your startup.


That is usually out of the question, because raster tiles don't allow you to customize the map (colors, which features to display at which zoom levels, etc). I find openstreetmap way too busy as a backdrop for data.

As I've said, the solutions are there, and there is an actual demand for cheap vector tile hosting without all of the other cruft that is bundled in the commercial solutions but isn't actually needed.


> I find openstreetmap way too busy as a backdrop for data.

Fortunately there's a large selection of background layers for Leaflet: http://leaflet-extras.github.io/leaflet-providers/preview/ I quite like CartoDB Positron as a muted background.


Thanks, but I'm really happy with our vector tileset solution, and don't want to downgrade to raster tiles. They are pretty slow compared to vector tiles with mapbox/maplibre gl.


Nobody is talking about taking away cars from people who live outside of cities. Yet this argument always comes up when problems with cars in densely populated areas are discussed.


This probably occurs because people and governments love one-size-fits-all solutions. There's a good chance any rules about cars made in the city will spill over and cause issues for people in the country; it happens all the time with regard to other things, so why not this?


Open your eyes and look at the "one size fits all" solution that we have today. Cars upon cars upon cars.

Alternatives such as public transport, walking and cycling have always been gimped in favour of car traffic, lest the car driving populace complain.


That's because the tone of "problems with cars in densely populated areas" discussions, and those having them, tends to completely ignore the significant part of the population for whom the argument is irrelevant.


They don't ignore that part of the population. It's just exhausting to always have to add caveats when the point is obvious.

Yes, obviously, cars are a necessity in more remote places. Nobody's taking away your cars.

Maybe this needs a shorthand. Just preface every discussion with "*not about you, fur trapper!"


Yes, talking about adding bike lanes in cities and improving public transportation should really infuriate the rural demographic that continually think everything is about them and their way of life. These ideas must all come from the cult of anti-car that wants to come into every tiny city and change everything overnight apparently. It’s typical status quo paranoia that always tries to halt any progress in these areas.


> the rural demographic that continually think everything is about them and their way of life

Rubbish, these articles are nearly always written by and supported by the dominant voice in the mainstream media - young, liberal city-dwellers, who seem to have no concept that there is a huge part of the population who are funding their "progress", often at the expense of their own infrastructure and quality of life.


I mean, sure, that is in fact the ridiculously paranoid narrative that keeps that argument afloat. Those damn city slickers.

