Hacker News | creesch's comments

Don't forget the tradition of having to migrate to a new API after a while because the current one gets deprecated for "reasons". Not just a newer version, but a completely new, non-backwards-compatible API that also requires its own setup.

To be fair, that might have changed in recent years. But after having to deal with that a few times for a few hobby projects I simply stopped trying. Can't imagine how it is for companies making use of these APIs. I guess it provides work for teams on otherwise stable applications...


> One could argue whether Phones with the Google android were ever really open.

In recent years, you can argue that Android is no longer open. In the early years of Android that argument would be much harder to make. To be clear, I am not talking hardcore FOSS libre open, but meaningfully open for the end user to do what they want on their device without much restriction. Early Android didn't have sandboxing, had no permission system, was easy to root, etc.

Certainly with Nexus devices you had pretty much the freedom to do what you wanted.

Could it have been more open? Sure, but I feel like it is almost disingenuous to say it was never open if we are comparing it to the real-world situation we find ourselves in today.


Early Android did have sandboxing and a permission system. It's just that you had to accept all permissions on app install. (Which is still a lot better than common practice on the contemporary desktop.)

That didn't make the system less open though. The user gets to make an informed (or not) choice.

What was different is that the Play store back then was basically a free-for-all. There was no meaningful approval process. This did contribute to making the system as a whole more open, but at a cost...


I am surprised nobody mentioned the curse of knowledge: https://en.wikipedia.org/wiki/Curse_of_knowledge

It is actually a fairly well-known phenomenon, certainly in educational circles. Being aware of it when you are writing any form of documentation is a first step. But even then, it is very difficult to properly assess the entry-level knowledge of your audience.

Having others read through your documentation and, importantly, work with your documentation is a good strategy.

One thing I can also highly recommend is to simply start out with a list of assumed prerequisite knowledge in your intro. Specifically things like certain environments, frameworks, etc. Bonus points for not only listing those but also linking to their documentation.


I am not sure if you thought through the implications of your proposal. LLMs are trained on examples in the training material. If something is new and lacks tangible examples, the adoption rate will be lower, so there will be less training material, and therefore LLMs will not be of use here.

In fact, that entire aspect of LLMs is something that is not talked about as often, but it is worth a whole discussion in itself. If I remember correctly, the availability of training material has already had a slight impact on more niche corners of the tech world.


Software should still come with documentation that LLMs can train on. On top of that, they have everything learned from interactions with developers asking about it, and developers will increasingly just go this route (following whatever guidance they get) rather than searching for other material, let alone writing guides for others. I'm not saying this is all that good, but that's the likely outcome.


Given it has been a few days, you might not read this. But I figured I'd reply anyway in case you do.

I mean this with no hostile intent, but have you honestly stopped and thought about what you typed here?

What I mean by that is: have you looked at the complete picture to see if what you are saying makes sense in relation to what you initially said?

You questioned the need for documentation. Now you are saying there needs to be good documentation for LLMs to train on. Good documentation for LLMs to train on is actually much more extensive than the documentation written for humans to begin with. So, you are effectively saying there needs to be more documentation.

Secondly, how can developers ask about something when they don't have decent documentation to start with?


There are some services that notify on replies :)

Despite your intent, your comment is kinda meanly worded, and it is perhaps you who did not read carefully, or at least mixed up terms. In my parent comment I did not mention documentation, but tutorials, as in guide articles by devs not associated with a project, writing about how to achieve some goal.

To be more specific, I think there are two distinct types of text that get written about a project during its lifecycle, in parallel:

#1 Documentation, written by the maintainers. This will always contain all functions, methods, API endpoints, components, whatever. It's the complete description of the full extent of features. It may or may not also contain the second type. An LLM can theoretically interpret and use the whole project based on this info.

#2 Guides, tutorials, reviews, forum posts, describing or giving tips on the whole project or specific features, or describing methods using that project ("Use x and y to process queries 20x faster on z!"). These writings have been essential for the spread and marketing of a project. I think this is what the OP article was about. My argument was that these would no longer be sought out; devs would just ask LLMs "how to achieve x with this tool".


> LLM can theoretically interpret and use the whole project based on this info.

That's the thing though, LLMs really can't. At least not to a degree that they are able to act on it at the same level as when trained on everything else, including tutorials and such.

Languages and technologies that LLMs excel at are those that are widely spread with numerous examples.

Just plain documentation with just the API calls isn't enough to train an LLM on. They effectively learn from example.

So with just #1, and no #2 written by humans, you will never get to a point where you can ask an LLM about the technology.

This is what prompted me to remark that I feel you haven't thought this through. Which you might have, but then I think you have an overly optimistic view of what data is enough to reliably train LLMs on.

Again, to stress the point, just documentation isn't enough. So you really do need humans adopting the technology first, widening the base of examples to train on.


> I don't think they're teaching Java much in university or boot camps anymore so it doesn't matter much anyway

That might just be the bubble you are in. Java is still one of the biggest languages used in corporations across the globe for anything backend related. Whether that is because it is a modern COBOL or because it actually is a stable language with a solid ecosystem might be a matter of some debate.

In the circles I navigate it is still heavily featured in various bootcamps.


> Does the JS ecosystem really move so fast that you can’t wait a month or two before updating your packages?

Really depends on the context and where the code is being used. As others have pointed out, most JS packages use semantic versioning. For patch releases (the last of the three numbers), you generally want to apply those rather quickly to code that is exposed to the outside world, as they contain hotfixes, including fixes for CVEs.

For the major and minor releases it really depends on what sort of dependencies you are using and how stable they are.
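To make the patch/minor/major distinction concrete, here is a rough sketch (in Python, purely for illustration) of how npm-style `^` and `~` version ranges decide which updates a dependency picks up automatically. The real npm rules have more edge cases (pre-release tags, `0.x` versions where `^` behaves differently), so treat this as a simplified model:

```python
def satisfies(version: str, spec: str) -> bool:
    """Simplified npm-style range check for "x.y.z" versions.

    "^1.4.0" allows minor and patch updates (same major),
    "~1.4.0" allows patch updates only (same major.minor),
    a bare "1.4.0" matches exactly.
    Pre-release tags and 0.x caret quirks are deliberately ignored.
    """
    v = tuple(int(x) for x in version.split("."))
    base = tuple(int(x) for x in spec.lstrip("^~").split("."))
    if v < base:
        return False  # never match below the declared base version
    if spec.startswith("^"):
        return v[0] == base[0]          # same major
    if spec.startswith("~"):
        return v[:2] == base[:2]        # same major.minor
    return v == base                    # exact pin

# A "^1.4.0" dependency picks up hotfix 1.4.1 automatically,
# but not the potentially breaking 2.0.0:
print(satisfies("1.4.1", "^1.4.0"))  # True
print(satisfies("2.0.0", "^1.4.0"))  # False
print(satisfies("1.5.0", "~1.4.0"))  # False
```

This is why CVE hotfixes shipped as patch releases flow in quickly under either range style, while major-version churn is the part you have to plan for.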

The issue isn't really unique to the JavaScript ecosystem either. A bigger Java project (certainly one with a lot of Spring-related dependencies) will also see a lot of movement.

That isn't to say there is no truth to the tropes about the JavaScript ecosystem being extremely volatile. But in this case I do think the context is the bigger difference.

> then again, we make client side applications with essentially no networking, so security isn’t as critical for us, stability is much more important)

By its nature, most JavaScript will be network connected in some fashion in environments with plenty of bad actors.


To state the obvious, one ends with "help" and the other with "com". It effectively is phishing awareness 101 that domains need to match.

You still don't know for sure then, of course. When in doubt, you shouldn't perform the requested action by clicking links in the mail. Instead, go to the domain you know to be legit and execute the action there.

Having said all that, even the most aware people are only human. So it is always possible to overlook a detail like that.
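The "domains need to match" check can be made mechanical. As a rough sketch (the helper name and logic here are my own, not from any phishing-detection library), comparing a link's hostname against the domain you expect looks something like this. Note that a plain substring check is not enough: `example-help.com` contains `example` but is a completely different registrable domain.

```python
from urllib.parse import urlparse

def matches_expected_domain(url: str, expected_domain: str) -> bool:
    """Return True only when the URL's hostname is the expected domain
    itself or a subdomain of it. A real check would also need to handle
    punycode look-alikes, IP literals, and the public suffix list."""
    host = (urlparse(url).hostname or "").lower()
    expected = expected_domain.lower()
    # Exact match, or subdomain separated by a literal dot.
    return host == expected or host.endswith("." + expected)

print(matches_expected_domain("https://support.example.com/reset", "example.com"))  # True
print(matches_expected_domain("https://example-help.com/reset", "example.com"))     # False
print(matches_expected_domain("https://example.com.evil.net/reset", "example.com")) # False
```

Even this simple rule catches the look-alike domains in question, which is exactly the detail a tired human can overlook.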


Corollary: don't click on any email links. (Most use some dumb domain name that could be phishing.)


There are many sites which provide ONLY links, e.g. with a token in the URL. What about those?


This is the problem. Those need to be very carefully clicked.

The whole web is a darn mess! I have no ideas for solutions.


I mean sure, but it is based on an actual MIT study. Which they are clearly trying to spin in their favor, but real data nonetheless.


> based on an actual MIT study

Speaking of which, anyone have a link to the study itself? Since the linked marketing post doesn't even provide a link to it, just a bunch of links to their own platform and social media accounts.

Probably better for the submission to link directly to the study, without the extra call-to-action fluff at the end?


It was mentioned and discussed in a fair amount of other articles and blog posts on HN as well. So it was fairly easy to find: https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...


I found this analysis interesting: https://www.youtube.com/watch?v=5QzqyrnL010


Specifically, a study done by the NANDA group at MIT:

https://nanda.media.mit.edu/

They seem to be pushing their own take on how to implement agentic AI.

Frankly, this study and the headlines surrounding it all seem like a bunch of marketing spin.


> an incredibly detailed answer and a believable range.

Recently, it started returning even more verbose answers. The absolute bullshit research papers that Google Gemini gives you are what turned me away from using it there. Now ChatGPT also seems to go for more verbose filler rather than actual information. It is not as bad as Gemini, but I did notice it.

It makes me wonder if people think the results are more credible with verbose reports like that. Even if it actually obfuscates the information you asked it to track down to begin with.

I do like how you worded it as a believable range, rather than an accurate one. One of the things that makes me hesitant to use deep research for anything but low impact non-critical stuff is exactly that. Some answers are easier to verify than others, but the way the sources are presented, it isn't always easy to verify the answers.

Another aspect is that my own skills in researching things are pretty good, if I may say so myself. I don't want them to atrophy, which easily happens with lazy use of LLMs where they do all the hard work.

A final consideration came from my experimenting with MCPs and my own attempts at creating a deep research setup I could use with any model. No matter the approach I tried, it is extremely heavy on resources and will burn through tokens like no other.

Economically, it just doesn't make sense for me to run against APIs. Which in my mind means it is heavily subsidized by OpenAI as a sort of loss leader. Something I don't want to depend on, just to find myself facing a price hike in the future.


> FOSS software seems aligned with the spirit, if not the letter, of the licenses.

Yeah, only if you look at permissive licenses like MIT and Apache; it most certainly doesn't follow the spirit of other licenses.

