You didn't click on the link I shared. I'm talking about the cost to produce the response, not the request. One AI prompt uses around 10 times more CPU and energy than a Google search.
If ChatGPT handles 1 billion queries a day, that's like the energy cost of 10 billion Google searches every single day.
Someone has to pay the electricity bill. We all know it's not free like you claim.
You also didn't click on the link the poster you replied to shared...
Seconding OpenRouter and fal; having to muck around with the idiosyncrasies of each vendor just to try their "bestest model", only to find out it doesn't satisfy your requirements, is a chore.
I'd stick with Google Search until Microsoft figures out how to handle a billion OpenAI requests a day without draining the water supply of entire cities. Because in Chile, for example, people are struggling.
Sorry, but I'm not interested in blog posts from lobbyists in Washington, the same place pushing to build mega datacentres with Nvidia servers in developing countries.
Also, Andy's blog post doesn't mention infrastructure-scale impacts. Even small actions add up, and as AI scales exponentially, so does the demand on energy and water. That part gets left out.
I'll stick with the research papers published by AI researchers [1] and investigative journalists [2], but thanks for sharing your link, it gives me a good idea of what lobbyists in Washington aren't saying.
Not sure if you actually read the article, but infrastructural impact is clearly discussed.
You sent over two links about the environmental impact of data centres. There is no denying that these are burdensome on the environment; the question is to what degree AI and its applications contribute to that effect. If you wanted to argue in good faith you'd be advocating for everyone to stop watching Netflix, because video streaming generates far greater demand than AI currently does, but I don't see you doing that.
I have always considered React one of the frameworks with the easiest interop with regular HTML and JavaScript. The reason is that you can easily access DOM nodes, and everything runs in JavaScript anyway. In my experience this is a lot more complicated in frameworks built around a template language.
That said, document.write is kind of special, since it requires JavaScript to run while the HTML document is still open (so before the load event fires). There is a reason this API is barely used today and has mostly been superseded by appendChild.
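A minimal sketch of the difference, assuming an inline script in the page (the strings are just illustrative):

    // document.write only works while the HTML parser is still open;
    // calling it after the page has loaded replaces the whole document.
    document.write('<p>written while the document is still being parsed</p>');

    // The modern replacement: create a node and append it, which works
    // at any point, including long after the load event.
    window.addEventListener('DOMContentLoaded', () => {
      const p = document.createElement('p');
      p.textContent = 'appended once parsing has finished';
      document.body.appendChild(p);
    });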
That would happen when, for example, you store the text input's value in Redux and write it back into the input on change, while the user is also typing and the main thread is blocked at the same time; that gives you a race condition.
That said, it only happens when one over-engineers stuff. Make sure to have a single source of truth, and that will be avoided.
The best approach is to always update the input synchronously in React (in the same tick in which the event is handled). If you do it in another tick, you will always have to handle race conditions. This is a problem for all input elements though, not specific to React or even the Web.
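A minimal sketch of what that looks like in practice: a plain controlled input where the state write happens inside the change handler itself (component and state names are just illustrative):

    import { useState } from 'react';

    function NameField() {
      const [name, setName] = useState('');

      // The state update runs in the same tick as the change event,
      // so the rendered value can never race with a deferred write
      // (e.g. a debounced dispatch that echoes the value back later).
      const handleChange = (event) => setName(event.target.value);

      return <input value={name} onChange={handleChange} />;
    }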
The JavaScript Fatigue argument is not a good one. There's simply no data that backs it up, and nobody is forced to use new libraries just because they use JavaScript.
I've seen third-party dependency churn in Elixir as well (packages that are no longer maintained, or alternatives that are better) - I think it's an inherent problem of using dependencies and has nothing to do with the language those dependencies are written in.
> As a developer I just want to get on with my work, not have to read another Hackernoon post on how everything from last week is obsolete because XYZ framework
My recommendation is that you don't read Hackernoon. This seems like a very ineffective way to level up your developer skills.
Edit: I agree that Elixir is very nice and would pick it over JavaScript for backend heavy applications without thinking. I just don't think this argument makes any sense in that context.
> nobody is forced to use new libraries only because they use JavaScript.
It's not completely true IMO for 2 reasons:
1- the Node.js standard library is quite poor compared to, say, Java's, Scala's or Python's, so you generally need quite a lot of modules to do anything
2- the npm ecosystem is much more amateur. For any given task there are tons of modules that are poorly supported by hobbyists, or not supported at all, which can force you to change modules/libs regularly. Compare that to the Java ecosystem, where more people work together to build well-supported, high-quality libs (the Apache libraries, for example)
Exactly. It is practically impossible to use a stable long-term Linux distribution with Node; most packages force you onto the latest or next-to-latest version of Node.
Another issue is that things move fast and break; you can't be sure that a three-month-old tutorial will still work today.
Edit: I know I can, and I did, grab Node and npm from outside the repositories, but you don't see this issue with other languages, where I'm not forced to install the latest release just to get most libraries working.
Yeah but on Node.js, Express has been the de facto framework of choice for building REST APIs for over 6 years. The JS fatigue phenomenon was mostly on the front end, and even that has basically settled down as people have rallied around React, Angular and Vue.
I’m not calling out the JS or Elixir communities by any means here. I just want to mention that a big part of the “JavaScript Fatigue” is the number of overlapping libraries that all do the same thing, and it isn't always straightforward to figure out which one is better - and this happens with so many packages that it would be impossible to vet them all. Just look up “isArray” (which I wish would just stop showing up in libs at this point; it's a built-in in both Node and the browser now - see the snippet below) and you get 102 packages that are very similar but clearly vary in implementation and quality.
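For what it's worth, the built-in already covers that case with no dependency at all (works in Node and every modern browser):

    // Array.isArray has been part of the language since ES5.
    Array.isArray([1, 2, 3]);      // true
    Array.isArray('not an array'); // false
    Array.isArray({ length: 3 });  // false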
Whereas with something like the Python or Rust communities (where I have more experience), I have found that even if there are packages that do the same thing, there usually aren't nearly as many duplicates, and the community has often done a better job communicating the value of each package. There is just less confusion around the whole thing.
I have also found there to be relatively little overlap between the big packages, in my experience
It's no more difficult than choosing an app in the app store. You go on NPM, you look at the download count, you look at the feature set, you install and then move on with your day. JS is a vast and very open ecosystem so duplication is inevitable.
There is no connection, at all, between a language being interpreted and it being error-prone to write or run. You either mean something else or are mistaken.
I'm curious what frontend problems you have solved that let you know this is ridiculous, and why that justifies a personal attack on the author of this comment?
They seem to be focusable with tab navigation, but there is no indication (dashed outline) of which element currently has focus. I was able to tab through and toggle the switches with the spacebar in Firefox.