
I wanted to like React so bad. I tried. I loved JSX and really appreciated the way you could reuse components. That was all very cool, and I was initially quite enthusiastic about it. But for whatever reason, I just could not figure out state management/hooks. Drove me crazy. It just always felt so unnecessarily complicated compared to other languages and frameworks I've used (even vanilla JS). Now, don't get me wrong: I fully accept the blame here. I am mostly self-taught (and not even the best 'student' in that context, haha) so I'm sure I just lack the overall knowledge base/big-picture understanding to really appreciate what they've done here. I hope I'll give it another shot one day, and perhaps with fresh eyes (and maybe with the help of a patient tutor), it will all 'click'!


Hooks are IMO one of the greatest innovations in UI programming.

I'll draw an analogy.

---

Why did async/await replace callbacks? They are really the same thing, aren't they? Aren't callbacks good enough?

Async/await allowed programmers to write asynchronous code as if it were synchronous.
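A hypothetical side-by-side (the getUser/getPosts helpers here are made up purely for illustration):

```javascript
// Callback style: control flow is inverted, and every step must
// manually forward errors to the final callback.
function fetchUserThenPosts(getUser, getPosts, done) {
  getUser((err, user) => {
    if (err) return done(err);
    getPosts(user.id, (err2, posts) => {
      if (err2) return done(err2);
      done(null, { user, posts });
    });
  });
}

// async/await: the same two steps read top to bottom, and errors
// propagate automatically, as in synchronous code.
async function fetchUserAndPosts(getUser, getPosts) {
  const user = await getUser();
  const posts = await getPosts(user.id);
  return { user, posts };
}
```

The deeper the chain of steps, the more the callback version nests, while the async version stays flat.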

---

Why did hooks replace classes/imperative? They are really the same thing, aren't they? Aren't classes/imperative good enough?

Hooks allowed programmers to write stateful code as if it were stateless.

    function MyComponent() {
      const [value, setValue] = useState(false);

      const buttonClass = value ? "on" : "off";

      return (
        <button
          className={buttonClass}
          onClick={() => setValue(!value)}
        >
          {String(value)}
        </button>
      );
    }
This is functional code, without scary, hard-to-reason-about side effects, except for one limited part, which operates in a stateful way.

I argue that this is simpler than the equivalent imperative-DOM modification code, and the gap between these only increases with real programs.

This isn't just a weird "web bro" thing; the hooks paradigm is useful (in theory and practice) to every UI platform.

I even wrote an implementation (closed source) for hooks in Angular. It was lovely.


This isn't "functional code" in the sense of functional programming.

It is some pseudo-DSL with hidden, hard-to-reason-about mutable state.


Of course it's not functional code. Just as

    let result;
    for (const item of items) {
      result = await foo(item);
      if (result) {
        break;
      }
    }
is not synchronous code. But you can write it very close to the mental model of synchronous programming.

The same is true of hooks. The code is not stateless, but you can write it very close to the mental model of stateless programming.

That is why -- despite them being optional -- hooks have become incredibly popular.


This is the best explanation for both async/await and hooks that I've ever heard, thank you!

It also shows that although there might be some learning investment required to use the more modern version of both, it's well worth the effort. I can't imagine going back to callback-riddled JavaScript.


what do you mean async/await "replaced callbacks"? Async/await "replaced" Promises, specifically then/catch chaining. Turns out Promises can be syntactically-sugared to imperative try/catch blocks.


Ah but if I said "async/await replaced Promises", a pedantic HN poster would correct me :/ "No it didn't replace them"


I’ll be more pedantic than that! Await is more akin to yield: the fact that it’s specific to promise resolution is a specialization of the more general function-body suspension used by generators, with an extra specialization for event-loop scheduling. And async just means you’re returning a promise no matter how your function exits.
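That generator connection can be made concrete. A toy runner (a sketch of the desugaring, not what engines literally emit; `run` is a made-up name) can drive a generator that yields promises and behave like an async function:

```javascript
// Toy async/await built on generators: each `yield` suspends the
// function body, and the runner resumes it when the promise settles.
function run(genFn) {
  return new Promise((resolve, reject) => {
    const gen = genFn();
    function step(method, value) {
      let result;
      try {
        result = gen[method](value); // resume the suspended body
      } catch (err) {
        return reject(err); // the body threw synchronously
      }
      if (result.done) return resolve(result.value); // "async" always yields a promise
      Promise.resolve(result.value).then(
        (v) => step('next', v),  // the "await" succeeded
        (e) => step('throw', e)  // the "await" rejected
      );
    }
    step('next', undefined);
  });
}

// `yield` here plays the role of `await`.
const total = run(function* () {
  const a = yield Promise.resolve(2);
  const b = yield Promise.resolve(3);
  return a + b;
}); // a promise resolving to 5
```

Note how rejection is routed through `gen.throw`, which is exactly why `try`/`catch` works around an `await`.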


hah, good point. Just wanted to point out that callbacks and async/await are different things, and callbacks are still the preferred way to customize behavior.


Hooks use a different paradigm, which is a solution to a problem you encounter when doing functional programming: persistence and side effects. The whole point of React is to build a tree of objects that will be rendered into an HTML page. The previous class-based architecture made it easy to store state and add side effects to this tree, by using properties and methods. But pure functional programming makes these awkward, as functions are transient, only transforming parameters into return values.

I don't know exactly how – never had the time to properly research it – but my current guess is that the hooks code taps into the scheduler/dispatcher – which handles the execution of the function representing the component – and stores values and logic somewhere. It uses the order of these calls as keys – you don't have to specify them – and provides the stored values as return values of the calls of the hook functions.

Hooks are escape hatches from the functional paradigm of React's components. You are always providing new values, and it decides when to store the updated version – mostly based on the deps array. You then get back what you stored. On the surface, it's still basically functional, but the actual logic is not. It's more like a repository of code and values.
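That guess can be sketched in a few lines. This is a toy illustration only (React actually keeps a linked list per component instance, not a module-level array, and all names here are made up):

```javascript
// Toy useState: values live in an array owned by the "renderer",
// keyed purely by the order in which the hooks are called.
let hookStates = [];
let hookIndex = 0;
let rerender = () => {};

function useState(initial) {
  const i = hookIndex++; // this call's slot = its position in call order
  if (hookStates.length <= i) hookStates[i] = initial; // first render only
  const setState = (next) => {
    hookStates[i] = next;
    rerender(); // stored value changed: run the component function again
  };
  return [hookStates[i], setState];
}

// The "scheduler": reset the call-order counter, then call the component.
function render(Component) {
  hookIndex = 0;
  rerender = () => render(Component);
  return Component();
}

// A component is just a function; the stored value survives between calls.
function Counter() {
  const [n, setN] = useState(0);
  return { text: `count: ${n}`, inc: () => setN(n + 1) };
}
```

Rendering `Counter` returns `count: 0`; after calling `inc()`, the next render returns `count: 1`, even though `Counter` itself looks stateless.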


One view of Hooks is that they are monadic or at least Monad-like and the deps arrays are a crude (reversed) notation for algebraic side effects. ("Reversed" because they declare which dependencies have side effects more than they declare which side effects the hooks themselves produce.)

It's still so very functional programming-inspired, even if the execution engine (scheduler/dispatcher) isn't that much like the Monad runtimes of most functional programming languages and the various Hook "monads" don't get captured in even the return type of functions (much less the parameter types) and get elided away. (It could be more "traditionally JS monadic" if it [ab]used async/await syntax and needed some fancy return type, even though the concepts for hooks don't involve Promises [or Futures, being the slightly more common FP Monad name]. Though also, from Typescript patch notes, I've heard React is exploring [ab]using Promises for something like that in the near-ish future.)

Monadic bindings needing to be in order, just like the "Hook rules", isn't even that strange from an FP perspective: there can be a big difference in which of two Promises is awaited first. There can be a big difference in which IO binding executes first (you don't want to input something before the prompt of what to input is printed).


All hooks are based on useReducer. Whenever a component is rendered, React sets a global (!) variable for the current component. They didn't even bother to have the component be a parameter into the hook, to get rid of the global variable. No, it must be pretentious and claim to be something it isn't. The global variable stores a linked list with the hook data. The calling order of your hooks matters.
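The order-dependence is easy to demonstrate with a toy slot-per-call-order useState (hypothetical names, not React's real code): put a hook behind a condition, and every later hook reads the wrong slot:

```javascript
// Minimal slot-by-call-order storage, as described above.
let slots = [];
let cursor = 0;
function useState(initial) {
  const i = cursor++;
  if (slots.length <= i) slots[i] = initial;
  return [slots[i], (v) => { slots[i] = v; }];
}
function render(fn, props) {
  cursor = 0; // the implicit global: "current component starts here"
  return fn(props);
}

function Broken({ showName }) {
  // Conditional hook call -- exactly what the "rules of hooks" forbid.
  const name = showName ? useState('Ada')[0] : undefined;
  const [count] = useState(0);
  return { name, count };
}
```

Render with `showName: true` and the slots are `['Ada', 0]`; render again with `showName: false` and `count` silently reads slot 0, i.e. the string `'Ada'`.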


I find the hooks so much easier to use than e.g. Google's ViewModel solution in Jetpack. They are just a joy to use.


Don't blame yourself. Software engineering tools are supposed to serve the programmer, not the other way around. If a tool is too hard to use, then you're using the wrong tool. I would recommend plain old JavaScript with a light wrapper on top, like jQuery. I know I'll get downvoted for this, but a lot of "modern" JavaScript frameworks don't properly abstract the underlying layer (read Joel Spolsky's article on leaky abstractions), and this results in a lot of problems. Furthermore, they optimize for the wrong thing: writing code. Most engineers spend 95% of their day reading code, and that's a lot harder than writing code.


It's true: React, Angular, and other popular heavy frameworks aren't designed to serve the programmer. They're designed to serve the organization that owns the codebase by making turnover easier. They force a specific approach and layout that new hires can be familiar with, regardless of whether it's even a good idea for the application at hand.


With all due respect, this sounds like an "old dog"/"new tricks" scenario.

"What's this .map() nonsense? The indexed for-loop always made sense to me."


There are certainly benefits from doing things in new ways, but a lot of the time things swing too far in the opposite direction and become a form of “IQ signalling” for smart developers.

It isn’t only front-end developers who are prone to this: 20 years ago, before FP became a mainstream thing, being able to do pointer arithmetic was seen as the differentiator between a “smart programmer” and a “bad programmer” (https://www.joelonsoftware.com/2006/10/25/the-guerrilla-guid...).

Was React great and fun to work with? Yes, but then Redux came along and made something simple into something stilted. Our team used MobX instead, which was simpler IIRC.

Similarly it’s been a couple of years, but I remember React Hooks being a bit complicated for the value we got out of them.

Ultimately, it’s hard to tell if something is an improvement, or a time suck to allow the elites to stand out. I stopped playing the game and stepped away from front-end development, as have many of my experienced colleagues.

I’m assuming things have settled as Big Tech focusses on profitability rather than funding an endless parade of frameworks, and the IQ-signalling will become less prevalent as generative AI competes with the brainiacs. When that happens, I’ll dip my toes back in the world of front-end development.


Redux has never seemed like a great solution for the programs I've written.

Though FWIW, MobX is far more magical than hooks.


well, those new tricks are going to bite you, later on down the road.


Hooks are half of an object/class system (implemented on top of a language that already had a whole one—arguably, two, from a DX perspective, though they're one under-the-hood) with non-standard and hard-to-read declaration syntax and bizarre behavior (FIFO-by-declaration-order property access and method invocation). It's not your fault you're having trouble with them, they're a weird boondoggle.


view = func(state).

That’s the magic of React. The problem is no longer manually managing transitions between states, but managing state itself.
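In the smallest possible terms (plain JavaScript, rendering to a string rather than to a real DOM):

```javascript
// view = func(state): the markup is a pure function of the data.
// No transitions to manage by hand; change the state, recompute the view.
function view(state) {
  const cls = state.on ? 'on' : 'off';
  return `<button class="${cls}">${state.on}</button>`;
}
```

`view({ on: false })` yields `<button class="off">false</button>`; flip the state and the same function yields the `on` version.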

That’s why state management is so critical in the react ecosystem.

Further, react has delegated two key features to userland: side effect and state management.

Personally, I don’t want react to solve either of these problems because it’ll just turn into the disaster that is react server components: feature development driven by corporate interests (eg vercel).


> view = func(state).

If this was actually how it worked, I'd have much less of a problem with React. What I mean, specifically, is that not all function dependencies are explicit. The notation makes it look like the state is passed into some function, and then the view is returned, depending only on that state. In reality, half the state is passed in (as an argument to a function call), and half comes out of some mysterious hook storage structure. If the state were really a function argument, you could easily inspect the whole state in a debugger. Instead... I'm not sure how you'd actually do that.


This brings up other questions too. Is the state on the client? How does one reconcile the server state and the client state? Is the client state a subset of the server state? Is the client state a tree matching the view tree? Does the state contain only the UI data to be rendered, or is it all app state?


React Developer Tools makes it easy to inspect component state in your browser. https://react.dev/learn/react-developer-tools


> it’ll just turn into the disaster that is react server components: feature development driven by corporate interests (eg vercel)

if you've followed along with the react team's discussions on server components, it is very clear that vercel implemented the react team's vision and less so the other way around.

as someone who has been using them extensively, I understand why. they're delightful.

vercel does control next.js which is the only framework to completely implement RSC, but as other frameworks catch up, RSC will have a wider surface area.


I think that view = func(state) is an unnecessarily complicated way to describe frontend development.


After years of react dev that is probably one of the wildest takes I’ve ever read. Kudos!


Hardly a wild take though, is it? You're in the context of "I am mostly self-taught (and not even the best 'student' in that context, haha) so I'm sure I just lack the overall knowledge base/big-picture understanding" and you come in and say view = func(state).

Here's how the React docs introduce it:

"React is a JavaScript library for rendering user interfaces (UI). UI is built from small units like buttons, text, and images. React lets you combine them into reusable, nestable components."

Hopefully you can appreciate the difference.


Hooks are complicated, but state/props is just right. Both enable you to enforce boundaries and make contained components. If props and state are mixed/diluted, the component won't know when to re-render, there'll be too many things to track, and there'll be many classic issues such as cyclic dependencies and the sluggish performance of two-way data binding.

Hopefully it'll click for you someday


1) React has no state management - approach it not from what it should be, but what it is - a tree of components that just render their props and where each component has some internal state.

2) Components have behavior. In class-based components, behaviors were implemented as methods. Simple enough: classes have methods. If you think about an iteration of rendering the UI tree, it goes down the tree and calls the mount method for each component in the UI.

With hooks, instead of calling a method on an instance, the behavior is implemented as an inner function of a component that is itself a function instead of a class. Instead of explicitly calling the mount method on a class instance, calling the function that is the component automatically runs its inner mount logic. The way it knows which inner function is a lifecycle hook is that you import something like 'useEffect' from the react library. And that global useEffect function knows the current component because the renderer sets it as it's going down the UI tree.

Hooks are better because they can be written as arrow functions (more concise, cleaner), and because you're calling a framework function, that function can have more complex behavior, like how useEffect can track when to update because it's being passed an instance of something like an internal state variable.


I've been using Preact signals, which are also usable in React (though admittedly I haven't tried them there). I've found them to be much more pleasant than useState (though I reflexively continue to use useState). I also use a pretty lazy pattern where I allow large portions of an application to rerender when state changes (implicitly, by letting prop changes percolate through). For simple apps this works really well, and for complex apps it still works. For example, I have an app with 200MB of rendered DOM nodes that get rerendered from the root when anything changes, and it's totally fine. I wouldn't ship that app widely, but for internal use it's 100% fine.
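For anyone who hasn't met signals: the core idea fits in a few lines. This is a toy sketch of the concept, not Preact's actual implementation: reading a signal inside an effect subscribes that effect, and writing the signal re-runs only its subscribers.

```javascript
let activeEffect = null; // the effect currently being (re)run, if any

function signal(value) {
  const subscribers = new Set();
  return {
    get value() {
      if (activeEffect) subscribers.add(activeEffect); // track the reader
      return value;
    },
    set value(next) {
      value = next;
      subscribers.forEach((fn) => fn()); // notify only actual dependents
    },
  };
}

function effect(fn) {
  activeEffect = fn;
  fn(); // the first run records which signals this effect reads
  activeEffect = null;
}
```

This is what makes signals feel more targeted than useState: updates flow to the code that read the value, rather than re-running a whole component.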


I’d be interested to know if NakedJSX works for you. I released it about 6 hours ago and I'm looking for feedback. It allows you to use JSX (without React) to generate static HTML, and also to create DOM nodes in client JavaScript, and without any of the client side ‘framework’ stuff of React. You have full control over what happens when.

https://nakedjsx.org/documentation/#getting-started

EDIT: clarity


I think there's a niche for libraries that allow you to update the DOM declaratively without any requirements on how you manage state. My understanding is that React looks at props / hooks in order to diff the state of your app and selectively recompute the virtual DOM. An alternative is to naively recompute the entire virtual DOM. I think this is how Mithril.js does it? It may be less efficient for large apps but it lets you manage state however you like.
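A sketch of that naive approach (in the spirit of Mithril's redraw, not its actual code; `h`, `renderApp`, and `redraw` are made-up names):

```javascript
// A virtual node is just plain data.
function h(tag, ...children) {
  return { tag, children };
}

// The whole view is one function of whatever state you hand it --
// no hooks, no tracked props; state lives wherever you like.
function renderApp(state) {
  return h('ul', ...state.items.map((item) => h('li', item)));
}

// On every change: naively rebuild the entire virtual tree, then let
// a commit step (a real library would diff here) update the page.
function redraw(state, commit) {
  commit(renderApp(state));
}
```

Wasteful for huge trees, but the state-management question disappears entirely.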


I always use it while developing a new web app, but admit that I always switch it to something "fancier" in production, mainly because I find alerts rather ugly. But, goodness gracious, it would simplify so many things! I've just sort of assumed that default browser alerts were considered a big no-no these days, but that assumption comes from the fact that I simply don't see them being used much. I guess I should look into this more. Sure would shorten dev times!


No idea about Go, but I was curious how GPT-4 would handle a request to generate C code, so I asked it to help me write a header-only C string processing library with convenience functions like starts_with(), ends_with(), contains(), etc. I told it every function must only work with String structs defined as:

    struct String { char *text; long size; };

...or pointers to them. I then asked it to write tests for the functions it created. Everything... the functions and the tests... worked beautifully. I am not a professional programmer so I mainly use these LLMs for things other than code generation, but the little I've done has left me quite impressed! (Of course, not being a professional programmer no doubt makes me far easier to impress.)


Interesting. I haven’t tried it with C. Hopefully the training code for C is higher quality than any other language (because bad C kills). Do you have a GitHub with the output?


That's the key: they still trudge through it. I fully agree that those people are indeed amazing. In my experience, though, many complainers toss out every objection possible hoping to make the new project seem so difficult that it isn't worth doing (so everyone will just drop it). Those complainers are toxic and can kill progress. But yes, the ones who verbalize issues just to make sure everyone has a full understanding of a project and the challenges in completing it... but still have every intention to conquer those challenges and stick with it to the end... yes, give me those types of 'complainers' ANY day :)


From my point of view, we get a new project that I think will take 6 weeks because of X, Y, and Z. Management thinks it should take 2 days, so they say go ahead anyway. The project takes 8 weeks to finish most of the features, and the effort forces us to drop every other thing we were already maintaining and working on. Everything is late, the targets set for the year are missed because we spent 2 months on this other project, and nobody is happy.


As an "old warhorse," I can tell you that I have been labeled a "complainer," because of such "complaints" as "Have you considered what happens when the user does X?" or "I tried pretty much the same thing, last year. It didn't turn out the way I wanted. Here's what happened..." or "Is that thread-safe?" or "Are you sure that will never be called on another thread? You do have a few network closures, here." or "Did you make sure that you let the connection go, after sending that instruction? It will result in power drain, if not."

etc.

Real killjoy.


Those types of complainers are certainly better than the types who don't actually do the tasks. But even better are those who do the tasks without burdening their coworkers with complaints.


Yeah, me too. I honestly rather enjoy and appreciate her answers, and it's nice to have links to where she got her info from (because she has made a few mistakes). But, yes, the 'eerie fun' of interacting with her is gone. And having to 'sweep' away our conversations after 8 replies is infuriating because I've been able to have some very normal and quite helpful conversations with her that I really wanted to continue. Oh well. Hopefully all these issues get sorted out over the coming weeks/months!


Oh, my! This is extremely helpful. And well organized, too.

The one-liners section alone is enough to earn this a place on my bookmarks bar. Many thanks to whoever put this together!


Absolutely agree. I’m creating a chatbot for my website, and while it primarily uses old fashioned pattern matching, it does send unrecognized patterns to a stronger AI to get help forming a proper response, and I certainly don’t want it offending my visitors!


I am definitely tired of the topic, but I’ll be perfectly honest as to the reason why: I am genuinely afraid of its impact on my career. My career has been sort of a combination of tech and education, and this (admittedly impressive) creation threatens both. And more and more of these things will be coming online over the coming months and years.

So, every time I see it mentioned I feel depressed and sick on the inside.

I will say, however, that so far it has actually enhanced my productivity in my current projects, and that’s fine. I just don’t think that its impact will remain so limited for long.

Of all the tech that’s been invented, this is the one I fear the most in terms of its negative impact on jobs. Hope I’m wrong!


Don’t worry, the models do not show any signs of creativity or understanding. Your intellect still has value, and imo this technology is not the invention that will eventually devalue it. It’s kinda like self driving cars. 5-10 years ago we assumed we were just a few years away from never having to drive ourselves again. Turns out that’s way further out than we had imagined.


As one software engineer, I can now do things that 10 years ago needed a whole team.

If ChatGPT can do the things you would have given to a junior or an assistant, and it's now faster to just do it yourself, it will have an impact, and it might raise the bar.

I would neither under- nor overestimate what it can replace in a few years.

And surprisingly, a lot of time and energy is spent teaching people old things, just because each new person also needs to learn them.

It just might be easier to teach it once to ChatGPT, or whatever comes next, than to people.


What we have been developing are a lot of solid building blocks: protocols, infra, frameworks. Not just some kind of heuristic engine.


I haven't.

Most of us haven't.

We are reusing, combining, etc.


ChatGPT can’t do the things I coach my junior teammates to do, though.


Depends on what (I didn't say it will replace everything tomorrow). Let's see in a few years.

And at least in my team it definitely feels like I teach every new hire/intern/young person the same old things.

I could also just start teaching an AI those things.

And this multiplies. If every one of us teaches the same AI everything only once, then this is much more effective.


We haven't really tried coaching it yet.

I don't think it has enough context to be a real coder yet, but it has some ability.

Having the token limit unlocked in the future might change a lot.


Self driving cars work right now. The problem is the cost of failure. If your self driving car works 99% of the time but it kills someone every hundredth trip on average, that's unusable. If your LLM works 99 out of 100 times, that's extremely useful.


Saying “work” is kind of a stretch. Until I see a self-driving car navigating the streets of Bengaluru in peak traffic, I would say it’s still far, far away.

All self-driving companies have this bias: if it works in the US and Europe, it works everywhere. It’s like the old saying of “it works on my computer. Looks good. Let’s put it in production.” We all know how the story ends ;)


“It’s useless until it works in every single scenario”. Sure that might be fair for self driving cars, but again, doesn’t apply much to LLMs. I don’t care if my chatbot can’t give me accurate answers for medical or physics questions. If it works for the stuff I want, at least most of the time, it’s very useful.


Not Bengaluru but would you take Shanghai for $500? https://youtu.be/PVMCjvsP6O8


Based on the standard of driving in most places I don’t think many drivers can safely navigate that kind of thing either.


> Self driving cars work right now. The problem is the cost of failure.

Self-driving cars only work on some roads specifically adapted for self-driving cars, and even then they require a team of specialists constantly monitoring them (so it's not very self-driving; the driver has just moved to another place).

If they require specially designed roads, they don't replace cars, they replace trams. And trams are much more efficient, so they don't really even replace trams.

You can't put a self-driving car on a random road and expect it to work.


> Self-driving cars only work on some roads specifically adapted for self-driving cars

I don’t think that is actually a thing. The places self driving cars are being used right now do not have roads that were especially adapted to support self driving cars.


Not a single routing app that I've tried consistently manages to get you to the correct side of the building when you request a route to a location.

If the problem of figuring out where to drive isn't even solved yet, how can you claim that "self driving cars work right now"? They don't, not for any useful definition of the term.


That’s not really a problem with the ‘self driving’ but a lack of data.

Unless they send a human out to every building and mark which door is the correct one, the algorithm is just guessing. I use a “professional” GPS for my job, and I don’t trust it at all to get me to the correct entrance; I have to study satellite images and type in the coordinates manually, and even then it’ll decide to reroute to the wrong place on occasion, because it doesn’t know there’s a gate in the way of its optimal path.

Bit of a hassle, really. If you ever see a big truck stuck on some random road with nowhere to turn around it’s probably the GPS’s fault. One of the first things new drivers need to learn is the GPS will actively try to kill you and can’t be trusted.


I’m not making a direct comparison in terms of the technologies capabilities, but rather the way we perceive(d) them from being just a few years away from taking over.


I certainly hope self-driving cars are the accurate reference class for this, for several reasons.

Among them: I don't think I've ever demonstrated more creativity than chatGPT, only equalled it; and while it sure does make mistakes, when I look back at my old code (or even blog posts) I realise I sure did a lot of that too. I'm pretty surprised it's even as capable as it is, given my understanding of how it works and what its "goals" are (the task "predicting tokens" doesn't seem like it should be able to do this much).

My fear is that the reference class for this is Go, where someone writing an AI for the game thought it was a decade away from beating humans less than two year before AlphaGo beat Lee Sedol: https://www.wired.com/2014/05/the-world-of-computer-go/


I would say they show at least "signs" of creativity and understanding.


Entrails of birds (used in old civilizations) could show «"signs"» of future events, but that can happen to be restricted to the perception of the reader.


If it looks creative but is secretly formulaic, how is that going to avoid causing problems for the employment prospects in "creative" jobs?

It doesn't matter if the thing a submarine does is "swimming", after all.


> matter if the thing a submarine does is "swimming"

We are supposed to want the submarine to swim well.



Yes, of course it is Dijkstra. The image is part of the speech "The threats to computing science" in 1984 - https://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/E...

In context: Dijkstra noted that IT discipline had strong regional tints, enabled by some "«malleability»" (over which the particularizing forces are applied). This "malleability" follows very fuzzy definitions of purpose, which he exemplified through a von Neumann that resembled medieval scholastic philosophers and an Alan Turing that proposed (to the judgement of Dijkstra) "irrelevant" perspectives in the same direction, such as "whether submarines swim" (for "whether computers think").

Now: the context of this branch of the discussion, instead, is about those "signs" noted by the OP, which I note are overwhelmed by evidence that those signs are doubtful. The point is not in the vague metaphors that Dijkstra found confusing, but in the opposite flat matter that "whether it swims or it 'submarines' [as a verb], it has to do it properly". Which is not in Dijkstra, because he was speaking of something else.


Further out than we imagined? There are several cities with operating robotaxis right now. You can buy a Tesla that can drive itself (with supervision) right now.

We’re certainly behind where we expected to be, but the future is indeed here!


Those robotaxi services are severely limited in scope, unprofitable, and still manage to screw things up so badly that even SF is getting sick of them:

https://www.theverge.com/2023/1/29/23576422/san-francisco-cr...


Do they screw up per-mile worse than a human driver?


From the article:

> Months later, a Cruise AV “ran over a fire hose that was in use at an active fire scene,” and another Cruise vehicle almost did the same at an active firefighting scene earlier this month. Firefighters say they could only stop the vehicle from running over the hose after “they shattered a front window” of the car.

I don't know about per-mile, but that's worse than a human for sure.


That's not worse than humans have done. Especially an elderly or drunk human.


I'm in tech and don't share your worries. If anything, since release it has proven that 1. it does not have the ability to reason logically or creatively, and 2. it has a strong ability to manipulate language into human-readable form.

If anything, I would say regular office workers are the most threatened, particularly if the job revolves around digesting and passing along information.


Sure. But will that be the case 5-10 years from now?

As a programmer, I gotta say if you're not at all concerned, then either you haven't been paying attention or you're in denial. Sooner or later it's coming.


Or you have been paying attention, understand the technology better than those who buy into the hype, and aren't worried because you know that incremental improvements to this tech cannot be a serious threat to your job security because the paradigm isn't capable of replacing you.

When my job is at risk it will be because we have AGI, not a better language model, and at that point everyone is at risk. Worrying about job security in the face of AGI is like worrying about what you'll do after your city gets nuked: it's unlikely to happen and there's nothing you can do if it does.


Why think only about "incremental" improvement? People aren't just making slight tweaks, new papers are published at a remarkable rate where people try significantly different architectures, training methods, etc, and that steady progress leads to ever more impressive results. How can you assume this direction of research will lead nowhere?

OK, ignore everyone who doesn't understand the technology. Of those of who do, I'm utterly amazed how pessimistic many are that this "isn't capable" of leading to AGI. Probably not Transformers specially, but LLMs show that intelligence is remarkably easy. You don't even need to put anything in the neural architecture designed to perform reasoning tasks, but they can be learnt regardless, because Transformers are flexible enough to learn to emulate computation (Turing machines) with bounded space and time, going beyond the famous result that 2-layer MLPs are universal function approximators.


> Probably not Transformers specially, but LLMs show that intelligence is remarkably easy.

LLMs show that language is remarkably easy. Ever since GPT-3 was released, I've been convinced that language comprehension isn't nearly as big a component of general intelligence as people are making it out to be. This makes some intuitive sense: I recall a writer for a tabloid expressing that they simply turn off their brain and start spinning up paragraphs.

But so far, I haven't seen any of these models perform logical reasoning, beyond basic memorization and reasoning by analogy. They can tell you all day what their "reasoning process" is, but the actual content of any step is simply something that looks like it would fit in that step. Where do you derive this confidence that advanced logical reasoning is a natural capability of transformer models? (Being capable of emulating finite Turing machines is hardly impressive: any sufficiently large finite circuit can do that.)


>Ever since GPT-3 was released, I've been convinced that language comprehension isn't nearly as big a component of general intelligence as people are making it out to be

"X is the key to intelligence"

computers do X

"Well actually, X isn't that hard..."

rinse and repeat 100x

At some point you have to stop and reflect on whether your concept of intelligence is faulty. All the milestones that came and went (arithmetic, simulations, chess, image recognition, language, etc) are all facets of intelligence. It's not that we're discovering intelligence isn't this or that computational feat, but that intelligence is just made up of many computational feats. Eventually we will have them all covered, much sooner than the naysayers think.


> All the milestones that came and went (arithmetic, simulations, chess, image recognition, language, etc) are all facets of intelligence.

Why should I have to care about those weird milestones that some other randos came up with once upon a time? I've never espoused any of those myself, so how is this supposed to prove anything about my thought process?

> It's not that we're discovering intelligence isn't this or that computational feat, but that intelligence is just made up of many computational feats. Eventually we will have them all covered, much sooner than the naysayers think.

Well, it certainly appears to me like there's a big qualitative difference between the capabilities you mentioned (arithmetic and simulations are just applications of predefined algorithms; chess, image recognition, and language are memorization, association, and analogy on a massive scale) and the kind of ad-hoc multi-step logical reasoning that I'd expect from any AGI. You can argue that the difference is purely illusory, but I'll have a very hard time believing that until I see it with my own eyes.


>so how is this supposed to prove anything about my thought process?

Because it's the same thought process that animated theorists of the past. Unless you have some novel argument to demonstrate why language isn't a feature of intelligence despite wide acceptance pre-LLMs, the claim can be dismissed as an instance of this pernicious pattern. Just because computers can do it and it isn't incomprehensibly complex, doesn't mean it's not a feature of intelligence.

>Well, it certainly appears to me like there's a big qualitative difference between the capabilities you mentioned... and the kind of ad-hoc multi-step logical reasoning that I'd expect from any AGI.

I don't know what "qualitative" means here, but I agree there is a difference in kind of computation. But I expect multistep reasoning to just be variations of the kinds of computations we already know how to do. Multistep reasoning is a kind of search problem over semantic space. LLMs handle mapping the semantic space, and our knowledge from solving games can inform a kind of heuristic search. Multistep reasoning will fall to a meta-computational search through semantic space. ChatGPT can already do passable multistep reasoning when guided by the user. An architecture with a meta-computational control mechanism can learn to do this through self-supervision. The current limitations of LLMs are not due to fundamental limits of Transformers, but rather are architectural, as in the kinds of information flow paths that are allowed. In fact, I will be so bold as to say that such a meta-computational architecture will be conscious.
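To make "heuristic search through semantic space" concrete, here's a deliberately toy sketch where a numeric distance stands in for the scoring an LLM might provide; the puzzle and every name here are made up for illustration:

```python
import heapq

# Toy stand-in for "search through semantic space": best-first search,
# where a cheap heuristic (distance to a goal state) plays the role an
# LLM's scoring of candidate reasoning steps might play. No real model
# is involved; this only illustrates the search skeleton.
def best_first(start, goal, steps, heuristic, limit=10_000):
    frontier = [(heuristic(start, goal), start, [start])]
    seen = {start}
    while frontier and limit > 0:
        limit -= 1
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for step in steps:
            nxt = step(state)
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(
                    frontier, (heuristic(nxt, goal), nxt, [*path, nxt])
                )
    return None

# Example puzzle: reach 25 from 1 using "+3" and "*2" moves.
path = best_first(1, 25, [lambda n: n + 3, lambda n: n * 2],
                  lambda n, g: abs(g - n))
print(path)
```

The point is only that a learned scorer plus a classical search loop is an old, well-understood combination; whether an LLM's scores are good enough to guide it over real reasoning steps is exactly what's in dispute.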


I think that's more representative of tabloid writers than anything, haha. Understanding text is difficult, and scales with g. GPT-3 can make us believe that it can comprehend text that falls in the median of internet content, and I guess there would have to be some edge cases addressed by the devs, but it can't convince humans that it understands more difficult content, or even content that isn't in its db.


I totally agree with your comments on language. I was stretching it to cover "intelligence" too; what I should have said is "many components of intelligence". It really isn't one thing. But I think analogical reasoning is one of the most important, maybe the most important component! I'm not alone. [1]

> Where do you derive this confidence that advanced logical reasoning is a natural capability of transformer models?

("Advanced logical reasoning" is asking a lot, more than I wanted to claim.) I was going off papers like [2] which showed very high accuracy for multi-hop reasoning by fine tuning RoBERTa-large on a synthetic dataset, including for more hops than seen in training (although experiments "suggests that our results are not specific to RoBERTa or transformers, although transformers learn the tasks more easily"). While [3] found "that current transformers, given sufficient training data, are surprisingly robust at solving the resulting NLSat problems of substantially increased difficulty" but "transformer models’ limited scale-invariance suggests they are far from learning robust deductive reasoning algorithms". I think that low scalability is to be expected, transformers don't have a working memory on which they can iterate learnt algorithmic steps, only a fixed number of steps can be learnt (as I was saying).
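For reference, the kind of multi-hop rule reasoning those datasets probe is trivial to compute symbolically; here is a toy forward-chaining sketch (the facts and rules are invented, loosely in the RuleTaker style), showing what the transformer is being asked to learn end-to-end:

```python
# Toy multi-hop deduction over facts and if-then rules, computed
# symbolically by forward chaining: keep applying rules whose premises
# are all known until no new facts appear.
def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

facts = {"nice(erin)", "big(erin)"}
rules = [
    (("nice(erin)",), "kind(erin)"),               # 1 hop
    (("kind(erin)", "big(erin)"), "green(erin)"),  # 2 hops
    (("green(erin)",), "rough(erin)"),             # 3 hops
]
result = forward_chain(facts, rules)
print(sorted(result))
```

Each extra hop requires chaining another rule application, which is exactly the depth-generalisation axis those papers test the models on.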

Unfortunately, looking for other papers, I found [4] which pours a lot of cold water on [2], saying "a deeper analysis reveals that they appear to overfit to superficial patterns in the data rather than acquiring the logical principles governing the reasoning in these fragments". I suppose you were more correct. I still think there's more than just memorisation happening here, and it isn't necessarily dissimilar to intuitive (rapid) 'reasoning' in humans, but as with everything in LLMs, everything is muddied because capability seems to be a continuum.

[1] Hofstadter, 2001, Analogy as the core of cognition, http://worrydream.com/refs/Hofstadter%20-%20Analogy%20as%20t...

[2] AI2, 2020, RuleTaker: Transformers as Soft Reasoners over Language, https://allenai.org/data/ruletaker

[3] Richardson &al. 2021, Pushing the Limits of Rule Reasoning in Transformers through Natural Language Satisfiability https://arxiv.org/abs/2112.09054

[4] Schlegel &al. 2022, Can Transformers Reason in Fragments of Natural Language? https://arxiv.org/abs/2211.05417


I never said incremental improvements to LLMs won't lead anywhere, I said they won't replace me. A sibling has already commented on why that would be, and I agree with them.

I just wanted to chime in and remind about the other part of my argument: my job is not threatened until we have AGI, and AGI would be so earth-shattering to the entire premise of our economy that there's literally no point in worrying about it as an individual. We can and should talk about society-level changes like UBI, but having individual anxiety about your own personal job is a strange response to the end of the entire global economic system.


> You don't even need to put anything in the neural architecture designed to perform reasoning tasks, but they can be learnt...

That sounds interesting. Can you provide a reference to this research?


See my reply to sibling: https://news.ycombinator.com/item?id=34672865

A more interesting example of transformers learning a process may be [1].

There's a large literature on applying language models to reasoning tasks, but not many on what's actually going on inside them. But see for example [2]. Also https://transformer-circuits.pub/ has a body of work on it, but still at a very early stage (see in particular "In-context Learning and Induction Heads").

[1] Extraction of organic chemistry grammar from unsupervised learning of chemical reactions https://www.science.org/doi/10.1126/sciadv.abe4166

[2] Analyzing the Structure of Attention in a Transformer Language Model https://arxiv.org/abs/1906.04284


Yep, I subscribe to this viewpoint. I am not as smart as the creators of ChatGPT or whatever is to follow, so maybe we’ll get lucky that AGI is pretty smart but can’t improve itself. But I think in the general case, if we create AIs that can replace programmers, economic concerns aren’t going to matter.


Yes. And we'll be in the true exponential times, because humans will be building wonderfully complex software in an instant, so we'll build huge, robust, open ecosystems in just a few moments.

The programming job market going away will be the last thing on my mind at that time, I think it would be one of the biggest shifts in human history.


While I share your take on LLMs, I'd add the worry that it may take a couple of years for the people who pay software engineers to figure out that engineers aren't replaceable yet.


As a programmer with a year of data science under my belt, and some level of understanding of current machine learning systems, I'm not worried.

The newer research papers coming out are where I would focus if I were really worried. ChatGPT is really an industrial-grade implementation of ideas that aren't exactly new. And the idea itself (the LLM) does not contain the ability to generate novel logic or solve unlearned problems. In fact, I would go further and say that it does not see your prompt as a logical problem, but rather as a collection of words, and its output is a transformation of those words driven by its extensive training set. It can reproduce logic that was in the training material, but it adds nothing to it, with no guarantee of correctness, only training weights.

There is no guarantee it didn't also train on the question part of Stack Overflow... and you only get answers as good as its training material.

Where I work we attempt to automate away repeatable problems with deterministic frameworks, which are imo far better. I would agree that ChatGPT will probably improve my writing if I were to use it on email and documents, but only in language, not in ideas.


> The newer research papers coming out are where I would focus if I'm really worried.

Great reply. Could you expand on this?


I wouldn't know until they come out. I don't think there is anything that smells like artificial general intelligence right now; pattern matching is much more dominant today.


> Sure. But will that be the case 5-10 years from now?

Not with language models. A language model can parse natural language, and with enough training data, give out what it thinks the answer is based on the data it was trained with. It is not General AI.

It cannot reason a solution for a problem that had an unknown answer. It won't be able to reflect logically on a context to foresee problems within this context. It cannot have a meaningful conversation. It won't be able to understand that one of the things it "knows" was incomplete, untrue, or just plain wrong, and fix itself.

It's a powerful tool, a game-changing tool. Perhaps as game-changing as the advent of computers, internet, or wireless communication. But it still won't replace humans.

General AI for now is science fiction. Perhaps this is unfortunate. I wouldn't mind an AI that can replace humans, even if I too am made obsolete with it.


> General AI for now is science fiction. Perhaps this is unfortunate. I wouldn't mind an AI that can replace humans, even if I too am made obsolete with it.

Maybe I’m optimistic, but I feel like we need AGI to reach the next level of development as a civilization. If software engineering jobs are the price to pay, so be it. World hunger, medical science, energy, space travel: if we can get all of these to take a ride on something resembling Moore’s Law, we are in for one hell of a fantastic future in our lifetimes.


I actually think AGI will bring about the collapse of civilization, and perhaps the end of humanity. I'm okay with it.

I also think it will solve things such as energy and space travel. World hunger, medical science (among other human problems) will become meaningless.

"Rejoice glory is ours / Our young men have not died in vain / Their graves need no flowers / The tapes have recorded their names"


When ChatGPT was first released, I read a memorable comment on HN (paraphrasing from memory): It looks like AI is poised to take over all the things that I enjoyed doing as a human such as art, music, storytelling, teaching, programming.

There only needs to be one intelligent species in any ecosystem. If AI becomes that species, humans will be relegated to the role of horses - only good for menial physical labor. That is, of course, only until the AI invents cars.


My guess is that this will become a problem roughly around the time when automated theorem proving becomes viable.

Until then, language models just regurgitate sentences. Anything that relies on the sentences being logically coherent cannot rely on these models. This includes engineering, finance, and journalism (ideally, although I am aware of CNET's experiment).

After then, we salaried humans might be in a pickle.

Edit: sentence.


Another to consider is that increased programmer productivity for simple code may result in just more code.

If any given music synthesizer company could produce its own DAW (digital audio workstation) by employing a single programmer or a contractor, the demand for programmers could increase.

Just speculating, it's hard to know.


>Sure. But will that be the case 5-10 years from now?

A big part of the failings of cryptocurrency and self-driving cars was that predictions of mass disruption were contingent upon the tech improving linearly from the current state. That didn't happen.


Shouldn't non-programming office workers be more concerned? That's what I'm thinking.


when a program will be smart enough to write at least as usable code as me, i'll happily hang up my keyboard and take up gardening.

until that happens what's the point of worrying? the year of the linux desktop joke got retired anyway so let's use self driving cars and "ai programmers" :)


Education faces the biggest problems of all. And it's self-inflicted. A large portion of what's required of students is listening to, regurgitating and producing half-coherent bullshit. And so teachers will have a hard time verifying homework now.

What should happen is teachers and students being required to up their skills to the point that students produce things better than AI. But the likely result will be lots of futile surveillance.


I see several possible solutions. One could be the teacher typing the same question into the chatbot, finding some subtle but wrong results, and then looking for the same errors in the students' texts.


It was impossible to verify homework before ChatGPT. People just cheated off of the smart kid and copied the work. Fundamentally the same problem as ChatGPT.


I'm using it to cut my engineering time down. If you're not simultaneously elated and threatened, you're sleeping.

The AI wave is going to be bigger than the internet in terms of societal impact.

We're watching humanity's swan song in real time.


I don't understand this perspective at all. The hardest parts of software are nailing down requirements, scheduling work, and fitting square pegs into round holes for legacy systems at a conceptual level.

No offense, but if writing the actual code is the part slowing you down there's something very wrong.


I'll bet you that everything you just outlined as being "hard" will see startups launched and successful within 5-10 years.

> The hardest parts of software are nailing down requirements, scheduling work, and fitting square pegs into round holes for legacy systems at a conceptual level.

Then why do we hire PMs and have engineering ICs? The only reason any of this is hard is because 1) there are a lot of stakeholders and moving pieces and 2) humans are subject to context switching productivity losses. Machines will absolutely be able to inject themselves into these business processes and chip away at our inefficiencies.

I bet you you're wrong. I quit my $400k+/yr TC job to focus on AI because I believe in this so strongly.

Let's check back in five years.


Don't bother betting other people. Your entire life savings should be invested in AI right now with your level of confidence.


That's more or less what I'm doing.

I'm hiring, btw.


How do you expect to make a difference without having a mega-server farm to train the Next Big Thing?

From what I’ve seen so far the ‘impactful’ models take tons of cash to initially train, tons of cash to continue to train and tons of cash to provide “free” access to the public to get the hypetrain chugging along.

Not criticizing but this is what I think people should be worried about, the “future” locked behind a handful of super rich corporate firewalls. Not even mentioning they have real-time data feeds on virtually everything they can pump through their AI for whatever purposes they can dream up. And I’m not even paranoid…


> Machines will absolutely be able to inject themselves into these business processes and chip away at our inefficiencies.

Actually, this I agree with. Chat bots in their current form would help bridge technical knowledge gaps between stakeholders.

Whether it would significantly lean up the less technical staff seems unlikely. Getting rid of engineering is even less likely.

> there are a lot of stakeholders and moving pieces

Yes. There are so many extending all the way out to internet discussions like this one and beyond to consumers, investors, etc... I think all this discussion is necessary to actually making anything of value. Will consulting chat bots instead of hacker news be the future, or is that where we admit there's something pathological going on?

> humans are subject to context switching productivity losses

I don't think letting people figure out what they want is a "productivity loss".


> The AI wave is going to be bigger

> The AI wave is going to be bigger

This (in context) is not the «AI wave». Artificial Intelligence is that which automates solution finding normally tasked to a professional. It has to be qualitatively adequate and competitive (which you could take as the real meaning of the "Turing test"; otherwise, "being scammed by con artists" was already a known factor and perspective at the time of the formulation). If any output fitted the definition, a random number generator (or a constant output) would qualify as AI.

Go back to Popper (this name just for example): there has to be a metric for "success".


> swan song

Swan song that started as the singers started regarding IQs of 0 as acceptable actors.


Most of the actions you and I do are "0 IQ". There's a little bit of intelligence in the in between.


When I wrote about «act[or|ion]s», I of course meant the relevant ones,

and in general, actions are based - prescriptively - on concretions of layers of developed intelligence (wisdom), intelligence which is constantly applied and exercised (just with different occasional effort, for resource management). If somebody applies it sparingly, that is to their own detriment - and to ours, if they are active.

So, again: a "sub-bohemian" idea that "anything goes" is the swan song. Wait till you actually needed intelligent action, to prevent critical losses.

--

I will rephrase concisely: you are supposed to exercise intelligence in every action. If you do not do it, that is your vice.


Don't fear it. You need to understand it and use it in your career. Its biggest fault is that it's not reliable and you can't tell whether what you're getting is fact or fiction. So the biggest benefactors will be those who know how to get the BS out of the results: if you know your subject, it will improve your productivity by a lot. A novice will get lost and be less productive.

There will be attempts at improving reliability but at some point it becomes too expensive and ineffective to try to improve it. People that know how to use it will really thrive.


> There will be attempts at improving reliability but at some point it becomes too expensive and ineffective to try to improve it.

At some point it will be self-improving, either through humans telling it to quit spewing bullshit or through learning that fact-checking is a good route to improvement.


I really think that’s behind a lot of the negativity I see on HN. This tech is threatening, to jobs as we know them and maybe eventually to our sense of self. I view it like the introduction of computers into the workplace. Many people were left behind and made obsolete, never having a chance to be as productive as younger colleagues. AI collaboration could be a huge force multiplier, but some people, maybe myself included, will never get fully on board and will be left in the dust.


This Gizmodo article is pretty insightful on the complexities and use cases: https://gizmodo.com/chatgpt-gizmodo-artificial-intelligence-...

The problems have been well documented, and the part where model makes up info and presents it as fact is a huge problem. Whether it’s solvable is up for debate until it is actually solved.

What this article has shown is that the automation part can work well in somewhat limited circumstances, but in the end it is a form of automation and perhaps a novel user interface options. It is not creativity, it is not something genuinely new. Automation is powerful and it enables productivity goals, which is usually profitable.


I'd say if you're already working as a software engineer, you're one of the "safe" cohorts that made it through before the doors closed.

Because, by the time AI is good enough to be replacing developer jobs (5-10 years), you will be an experienced engineer who would be negotiating requirements, inputting the prompts, reviewing the output, working on the architecture and process framework within which the AI operates, etc.


I'm finding it interesting in that, while I likely get little benefit from using it as a tool in many respects, I will likely have to spend time working with/mentoring junior folks who rely heavily on it over time, and I will need to figure out what it's covering for them and what it leaves out... and whether I'm giving prompts to GPT or talking to a person so they can come up with GPT prompts.


At the moment it's too error-prone to do anything without human interaction. Progress has been fast recently but will slow down because there's less new data to train on and scaling hardware will get too expensive.


It won't threaten your job; it will do the opposite. It will make you 10x more productive and thus more valuable.


Or…

It will make 10% 10x more valuable and 90% obsolete.


That's the thing, the amount of programming work that needs to be done or can be done is probably 100x than the amount that is being done.


Very good points. My Python skills are such that I would be laughed out of a Python job interview for hilarious incompetence, but the small subset of the language I know and am comfortable with has allowed me to automate all sorts of tasks and saved me an enormous amount of time over the years. What I love about Python is the great spectrum of possibilities with it over quite a range of skill levels!
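In that spirit, a hedged example of the sort of small automation a modest subset of Python covers (the script and its names are hypothetical, just one plausible time-saver): sorting files into folders by extension.

```python
from pathlib import Path
import shutil
import tempfile

# Hypothetical example of small-scale automation: move every file in a
# folder into a subfolder named after its extension.
def organize_by_extension(folder: Path) -> dict:
    moved = {}
    for f in sorted(folder.iterdir()):  # snapshot + deterministic order
        if f.is_file():
            dest = folder / (f.suffix.lstrip(".") or "no_ext")
            dest.mkdir(exist_ok=True)
            shutil.move(str(f), str(dest / f.name))
            moved.setdefault(dest.name, []).append(f.name)
    return moved

# Try it on a throwaway directory:
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    for name in ("a.txt", "b.txt", "c.csv"):
        (root / name).write_text("demo")
    result = organize_by_extension(root)
print(result)
```

Nothing here goes beyond `pathlib` and `shutil`, which is rather the point: a tiny slice of the language is enough for this kind of task.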


Where my skills are lacking is in areas that just aren't relevant to my work. The quiz had a lot of questions about the testing infrastructure, whereas I use Python mostly in Jupyter notebooks, and for laboratory automation.

Still, I'm going to take a refresher course next year. I don't want to be the guy who's teaching Python without knowing it.


Are you serious?! That is so hilarious to me, because I do the same thing! And I am always so terrified that I will get an interview, and they will want to see a sample of my code, and I will have forgotten to take out all the bits of juvenile humor that are scattered through the code of my personal projects!

