Hacker News | davidism's comments

This is being talked about now with [PEP 411 - Provisional packages in the Python standard library][1].

[1]: https://www.python.org/dev/peps/pep-0411/


> A GET query is always a URI. Anyone can link to it.

Except this isn't really useful outside of bookmarks in a browser. Who would bookmark an API endpoint returning JSON? In code, it's just as easy to make a request with query parameters as it is with a post body.

    #!/usr/bin/env python
    import requests

    # Query string parameters and a form-encoded body are equally easy:
    requests.get('http://example.com/', params={'key': 'value'})
    requests.post('http://example.com/', data={'key': 'value'})


Which endpoint (or resource locator) you hit probably shouldn't dictate the representation type you get back. That's what the `Accept` header is for. If you hit it with a browser, you'd expect to get back an HTML version of the same resource.

If you send `Accept: application/json`, then you should expect to get JSON back. Etc.
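As a sketch of that idea (the route and payload are hypothetical; assumes Flask is installed), one endpoint can serve both representations by inspecting the `Accept` header:

```python
from flask import Flask, jsonify, request

app = Flask('demo')

@app.route('/user/<name>')
def user(name):
    # One resource locator; the representation depends on Accept.
    best = request.accept_mimetypes.best_match(
        ['application/json', 'text/html'])
    if best == 'application/json':
        return jsonify(name=name)
    return '<h1>%s</h1>' % name

# Exercising it with the test client:
client = app.test_client()
json_resp = client.get('/user/alice',
                       headers={'Accept': 'application/json'})
html_resp = client.get('/user/alice',
                       headers={'Accept': 'text/html'})
```

Browsers send `text/html` first in their `Accept` header, so they get the HTML view, while an API client asking for JSON gets JSON from the same URL.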


A fair point. I was referring to the API being separate from the frontend, not what the API returns. But I could see someone designing their application the way you describe.


And when someone does design their application in that manner, the various benefits of doing things 'right' start to pay off.

If a resource or result is addressable, a third party can build an API that integrates with my API and link straight to the results of particular queries.

Granted, this would be hard in the context of the Dropbox API, because they already 'break' a lot of other rules.


Yes, if reusable/sharable queries are something you want, then you don't really have a choice but to expose an API for creating and retrieving them. Not sure if you're trying to make some other point related to the `Accept:` header the parent comment was about.


> this isn't really useful outside of bookmarks in a browser.

A GET query's "always a URI" status is useful to caching proxies, so going against the protocol could mean additional work configuring your proxies.


Especially when a good majority of APIs require you to pass something like an OAuth token in a header, ruining the bookmarkability completely.


Also, the behavior of a GET can change based on the headers. (think: basic auth.) So no, it can't be purely linked to.


I think, if an API is well designed, you should strive not to let the resource change depending on the Authorization header.

Ideally, it should only make the difference between access granted or denied (401 for bad authentication, 403 if you're simply not allowed).

This isn't possible everywhere, but it's definitely something to aim for. If you can't, you can still use facilities such as the Vary header to indicate that the Authorization header alters the result.
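A sketch of that fallback (hypothetical endpoint; assumes Flask is installed): the response declares that it varies on Authorization, so shared caches keep separate copies per credential instead of serving one user's result to another:

```python
from flask import Flask, jsonify

app = Flask('api')

@app.route('/reports')
def reports():
    # Hypothetical resource whose contents depend on the caller.
    # Vary tells caches to key the response on the Authorization header.
    resp = jsonify(reports=[])
    resp.headers['Vary'] = 'Authorization'
    return resp

resp = app.test_client().get('/reports')
```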


I'm just pointing out that GET requests and linkable URLs are very different things. Even consider the Accept header: it's perfectly valid to respond with different content if the request asks for it.


We recognize that it's gotten a bit out of control lately, hence the topic during the meeting. I think we ended up with some good guidelines for managing it.


Then edit your question to acknowledge the "duplicate" and clearly explain why your question is different. It will automatically go into a Reopen Queue where users will evaluate it and vote to reopen. The system's not perfect, so if it still doesn't get reopened, open a discussion on meta.


Most comments should end up as an edit to the question or answer, clarifying some point, at which point they can be removed. The Question and Answer are the important things on Stack Overflow, not the communication that went into creating and tuning them.

The remaining comments are just fluff, such as "thanks", which can more appropriately (for the site) be expressed as an upvote or accept. Or they're asking a new question, in which case the parent's comment applies.


This culture doesn't really invite answers to improve over the years as more and more people who might know more happen to come across them.


Interesting that this is the case, because that was one of the key goals of the site when they were first building it.

I listened to Joel and Jeff talk about it in their early podcasts when they were first building the site. To be the definitive answer for a question requires that the answer be able to change and evolve over time as new information becomes available.


That's what the edit link on answers is for.


The answer was not closed, the question was. This is not a bad thing, as it funnels everything towards one canonical location.


They don't have much value to Stack Overflow, obviously; otherwise they would have remained open and been voted up on Stack Overflow.


I think the point is that these are often upvoted questions[1] with thought-out answers. In fairness, I'd argue the question I just referenced doesn't "belong" on SO, as it's pretty clearly opinion-based. But if I Google "simple django rich text editor" and this is the first result, I'd argue it does bring value to SO by bringing me onto the site.

[1] http://stackoverflow.com/questions/4674609/looking-for-a-ric... (not my find, posted in another comment on this thread)


Yup.

Having strict moderation standards is SO's business. I may or may not agree, but it seems to work for them.

Leaving closed questions, particularly those without answers, on their site sucking up Google juice and deceptively appearing in my search results is scummy. If they don't want them on the site, they should ask Google not to crawl them. And that goes squared when the question doesn't have an answer.


Honest mistake on my part, thanks for linking it.


That warning is old. Most popular Flask extensions work well with Python 3 now. I've recently written a large Flask application targeted at Python 3 with no issues. If you have problems with Python 3 compatibility, report the issue, the maintainers are good about fixing these things.


To solve this, I have a custom Celery instance that wraps each task in my Flask app's context. So you can treat Celery tasks as just another request.


Yep, that's what I did as well, and I think that's the documented way (using a Flask test request context, I think). But having to do this taught me a lot about SQLAlchemy sessions, flushing, and object states in a multithreaded environment. I recommend doing it for an in-depth understanding.
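On the multithreaded point, SQLAlchemy's `scoped_session` is the usual tool: it hands each thread its own `Session` from a registry. A minimal sketch (in-memory SQLite and hypothetical names, for illustration only):

```python
import threading

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine('sqlite://')  # in-memory database for illustration
Session = scoped_session(sessionmaker(bind=engine))

sessions = []

def grab():
    # Calling the registry in a worker thread yields that thread's session.
    sessions.append(Session())

t = threading.Thread(target=grab)
t.start()
t.join()
sessions.append(Session())  # the main thread gets a distinct session
```

The two `Session()` calls return different objects because the registry keys on the calling thread, which is what keeps flushes and object states from leaking between threads.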

