krig's comments | Hacker News

I’m using it in neovim right now. For vim, I guess you need an LSP plugin, but it should work fine.


(reacting to the title alone since the article is paywalled)

AI can’t push a houseplant off a shelf, so there’s that.

Talking about intelligence as a completely disembodied concept seems meaningless. What does ”cat” even mean when compared to something that doesn’t have a physical, corporeal presence in time and space? Comparing like this seems to me like making a fundamental category error.

edit: Quoting, “You’re going to have to pardon my French, but that’s complete B.S.”

I guess I’m just agreeing with LeCun here.


It's referring to the fact that cats are able to do tasks like multistage planning, which he asserts current AIs are unable to do.


> It's referring to the fact that cats are able to do tasks like multistage planning, which he asserts current AIs are unable to do.

I don't understand this criticism at all. If I go over to ChatGPT and say "From the perspective of a cat, create a multistage plan to push a houseplant off a shelf" it will satisfy my request perfectly.


ChatGPT only decides what to write one token at a time.


Hmmm... You didn't really explain yourself - I'm not sure I understand your point.

But guessing at what you mean - when I evaluate ChatGPT, I include all the trivial add-ons. For example, AutoGPT will create a plan like this and then execute the plan one step at a time.
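
Roughly, the plan-then-execute pattern those add-ons use looks something like this (a hand-wavy sketch; `llm` stands in for whatever chat completion API is being wrapped, and the prompt wording is made up):

    // Hypothetical plan-then-execute loop in the style of AutoGPT-like tools.
    // `llm` is a stand-in for a chat completion call, not a real client library.
    declare function llm(prompt: string): Promise<string>;

    async function planAndExecute(goal: string): Promise<string[]> {
      // 1. Ask the model for a numbered, multistage plan.
      const plan = await llm(`Create a numbered step-by-step plan for: ${goal}`);
      const steps = plan.split("\n").filter((s) => /^\d+\./.test(s.trim()));

      // 2. Carry out each step one at a time, feeding earlier results back in.
      const results: string[] = [];
      for (const step of steps) {
        results.push(await llm(
          `Goal: ${goal}\nResults so far:\n${results.join("\n")}\nCarry out: ${step}`
        ));
      }
      return results;
    }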

I think it would be silly to evaluate ChatGPT solely as a single execution endpoint.


As I understand it, a model simply predicts the next word, one word at a time. (The next token, actually, but for discussion's sake we might pretend a token is identical to a word.)

The model does not "plan" anything; it has no idea how a sentence will end when it starts it, as it only considers what word comes next, then what word after that, then what word after that. It discovers the sentence is over when the next token turns out to be a period. It discovers it's finished its assignment when the next token turns out to be a stop token.

So one could say the model provides the illusion of planning, but is never really planning anything other than what the next word to write is.
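
To make that concrete, here is a minimal sketch of what autoregressive generation looks like, assuming a hypothetical nextTokenDistribution function standing in for the model and greedy decoding for simplicity:

    // Minimal sketch of autoregressive (one-token-at-a-time) generation.
    // nextTokenDistribution is a hypothetical stand-in for the model: it
    // returns a score for every token id, given the tokens produced so far.
    declare function nextTokenDistribution(tokens: number[]): number[];

    const STOP_TOKEN = 0; // assumed id of the end-of-sequence token

    function generate(prompt: number[], maxTokens = 256): number[] {
      const tokens = [...prompt];
      for (let i = 0; i < maxTokens; i++) {
        const scores = nextTokenDistribution(tokens);
        // Greedy decoding: pick the single highest-scoring next token.
        const next = scores.indexOf(Math.max(...scores));
        if (next === STOP_TOKEN) break; // it "discovers" it is finished here
        tokens.push(next);
      }
      return tokens;
    }

The "plan" never exists as a separate artifact anywhere in that loop; there is only the growing token sequence.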


HEH.

Ok. Suppose I create the illusion of a calculator. I type in 5, then plus, then 5. And it gives me the illusional answer of 10.

What's the difference?


To use your metaphor, the difference is that we are discussing whether someone can invent a calculator that does new and exciting things using current calculator technology.

You can't say "who cares if it's an illusion it works for me" when the topic is whether an attempt to build a better one will work for the stated goal.


Way back at the top I explained that ChatGPT can indeed create a multistage plan. I encourage you to try it for yourself.

Otherwise, I think we should go our separate ways. You take care now.


I think I've explained several times that it can't plan, as it only does one word at a time. Your so-called multistage plan document is not planned out in advance; it is generated one word at a time with no plan.

If you don't care about the technical aspects, why ask in the first place what Yann LeCun meant?


Friend, it gave the answer 10, just like the illusionary calculator. I'm sorry you don't like how it got the answer.

You take care now.


"Now I know you said it can't plan, but what if we all agree to call what it does planning? That would be very exciting for me. I can produce a parable about a calculator if that would help. LeCun says it has limitations, but what if all agree to call it omniscient and omnipotent? That would also be very exciting for me."


Look man, 5 + 5 = 10, even if it's implemented by monkeys on typewriters.

This argument we're having is a version of the Chinese Room. I've never found Searle's argument persuasive, and I truly have no interest in arguing it with you.

This is the last time I will respond to you. I hope you have a nice day.


I don't think we're having an argument about the Chinese room, because as far as I know LeCun does not argue AI can't have "a mind, understanding or consciousness". Nor have I; I simply talked about how LLMs work, as I understand them.

There's a lot of confusion about these technologies, because tech enthusiasts like to exaggerate the state of the art's capabilities. You seem to be arguing "we must turn to philosophy to show ChatGPT is smarter than it would seem", which is not terribly convincing.


No.

Take care now.


Thanks, that makes more sense than the title. :)


We replaced the baity title with something suitably bland.

If there's a representative phrase from the article itself that's neutral enough, we could use that instead.


Out of curiosity, would you say a person with locked-in syndrome[0] is no longer intelligent?

[0]: https://en.wikipedia.org/wiki/Locked-in_syndrome


I don’t think ”intelligent” is a particularly meaningful concept, and it just leads to the kind of confusion your comment hints at. Do I think a person with locked-in syndrome is still a human being with thoughts, desires and needs? Yes. Do I think we can rank intelligences along an axis where a locked-in person somehow rates lower than a healthy person but higher than a cat? I don’t think so. A cat is very good at being a cat, much better than any human is.


I would also point out that a person with locked-in syndrome still has ”a physical corporeal presence in time and space”, they have carers, relationships, families, histories and lives beyond themselves that are inextricably tied to them as an intelligent being.


Surely, this article is lampooning the idea that with enough rules and regulations, skill and experience don’t matter.

For a contrasting argument, I recommend reading Programming as Theory Building by Peter Naur.


It’s not just about typing out code, even though being able to do that quickly is more valuable than you realize.

There are plenty of other areas where typing efficiently facilitates thinking about software.

- Writing clear notes and design documents helps in developing ideas.

- Writing documentation.

- Writing commit messages and issue descriptions.

- Writing emails and messages when communicating with others.

Typing efficiently is only one factor here; communicating clearly and efficiently, reading comprehension… these are also important. But typing slowly impedes the development of those related skills as well.


I’ve seen people make exactly this mistake with Next.js. IMO React Server Components are a fantastic tool for losing track of what’s exposed client side and what isn’t.


Next.js makes you prefix env vars with NEXT_PUBLIC_ if you want them to be available client side, and Vercel has warning flags around it when you paste in those keys.

It's obviously not foolproof, but it's a good effort.
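
Roughly how that plays out in practice (illustrative names and values, not from a real project):

    // .env.local (illustrative)
    // DATABASE_URL=postgres://...          <- no prefix: stays on the server
    // NEXT_PUBLIC_ANALYTICS_ID=abc123      <- prefixed: inlined into the client bundle

    // In client-side code, only the prefixed variable is available:
    const analyticsId = process.env.NEXT_PUBLIC_ANALYTICS_ID; // defined in the browser
    const dbUrl = process.env.DATABASE_URL;                   // undefined in the browser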


That’s env vars, but not actual variables. It’s really easy (if you are not actively context aware) to, for example, pass a ”user” object from a server context into a client component and expose passwords etc. to the client side.
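
Something like this is the failure mode I mean (a contrived sketch; getUser, ProfileCard and the field names are made up):

    // app/profile/page.tsx - a contrived server component (sketch)
    import { ProfileCard } from "./ProfileCard"; // a client component ("use client")

    // Hypothetical data access; the passwordHash field is the point.
    declare function getUser(id: string): Promise<{
      name: string;
      email: string;
      passwordHash: string;
    }>;

    export default async function Page() {
      const user = await getUser("42");
      // Passing the whole object serializes every field, passwordHash included,
      // into the payload shipped to the client component.
      return <ProfileCard user={user} />;
    }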


That's a fair point! It definitely feels easier to make that mistake, and anything where context and discipline are required is a good candidate for making some horrifying blunders :)


If you add `import "server-only"` to the file, it will fail to compile if you try to use it on the client. React also has more fine-grained options where you can “taint” objects (yes, that’s the real name).
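
Both look roughly like this (a sketch: the data-access layer is hypothetical, but the `server-only` package and React's experimental taint API do exist):

    // data/user.ts (sketch)
    import "server-only"; // build error if this module ends up in client code
    import { experimental_taintObjectReference } from "react";

    // Hypothetical data-access layer, not a real API:
    declare const db: {
      users: { find(id: string): Promise<{ name: string; passwordHash: string }> };
    };

    export async function getUser(id: string) {
      const user = await db.users.find(id);
      // If this exact object reference is later passed to a client component,
      // React throws with this message instead of silently serializing it.
      experimental_taintObjectReference(
        "Do not pass the full user object to the client",
        user
      );
      return user;
    }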


Yeah, the problem is that these mitigations require the developer to be context aware; ”server-only” only saves you in the positive case where you correctly tagged your sensitive code as such. The default case is to expose anything without asking. I have also seen developers simply mark everything as ”use client” because then things ”just work” and the compiler stops complaining about useState in a server context, etc.


I doubt that statement was meant to be some kind of absolute rule; big cats will definitely eat you alive if they feel like it. But at least you are perhaps more likely to get a quick death...

The advice I grew up with (in Swedish bear country) is to do these things, in order:

1. Make noise while in the woods, bears will generally avoid you if they know you are there. You don’t want to startle a bear.

2. If charged, stay calm, make yourself big and talk to the bear, move slowly.

3. If attacked, play dead.

With luck, the bear will lose interest once it doesn’t perceive you as a threat. If it’s hungry, well, bad luck. There have been cases of people scaring bears off by punching them in the nose, so as a final resort I guess that’s something to try.


Grizzly: play dead. Black bear: fight back. Polar bear: make your peace with God.


If it's brown, lay down. If it's black, fight back. If it's white, good night.


If it's jelly, put it in your belly. If it's teddy, take it to beddy.


Sedges have edges, rushes are round. Grasses are hollow and grow in the ground.


In the hierarchy of fucking with things and getting fucked with, think of yourself as one of the first. If your solution to problems is sentences that start with "so", you are food. What can help is a warmup to a kill display. Meaning, you show a little magic show of dexterity while moving forward. Throw a knife up and down, raise internal doubts. Surprises kill the predator mood. The predator being used to your presence is thus bad.


It’s not a physical law; it relates to people. “Law” has a wider meaning outside the field of physics.

I recommend watching the talk! He specifically addresses the very points you bring up. Of course reality is messy, and people especially so, but Conway’s law is one of very few theories within CS that have held up in replication studies.


If you want to dig into Conway's law and its implications, I can't recommend this video essay by Casey Muratori enough: https://youtu.be/5IUj1EZwpJY?si=dPxsXieBwZsP0PPP

Fair warning though, you'll lose all hope that companies like Microsoft will ever manage to produce anything that they don't then ruin.


>Fair warning though, you'll lose all hope that companies like Microsoft will ever manage to produce anything that they don't then ruin.

you are implying that we still had any hope left in such endeavors.


I've witnessed what Casey calls "Conway's Nightmare" (at 32:04 in his video) firsthand, and it is definitely real. Basically, a software system isn't necessarily just a reflection of an organization; it's an amalgamation of all of the organizations that worked on the software system in the past.


Does Conway’s law imply that an organization of size 1 produces a system with components that are optimally integrated?

I sometimes wonder what would happen if a company hired 100 software engineers, gave them all the same task to program the entire thing independently on their own, and then paid them as a monotonic function of the ranking of their output against their peers. (Obviously, there are limits on what type of product could be developed here, but I think this still encompasses far more company products than you would suspect.)


> Does Conway’s law imply that an organization of size 1 produces a system with components that are optimally integrated?

It’s creative work, like anything else. 1 is not necessarily always the right size, but 100 certainly isn’t. If you look at musical bands, usually the sweet spot is between 1 and 6. You can add more in, e.g., an orchestra or choir, but then not everyone is involved in the creative work.

Then it depends on how well you vibe with people. If you have extremely unique ideas and thoughts, you may be better off alone. If you find peers with similar values, preferences and experiences, you might be fine with 5 people.


This feels like something that is just crazy enough to work.

Most teams I've been on or managed were at their optimal efficiency with three people. Just enough to keep any one person from making too many poor decisions, but not so many that you spend half a day in meetings.


An organization of size 1 has other constraints, like time... But also, Conway's law applies outside the organization as well; communication constraints shape society at large, and the way a society is organized limits the communication channels available.

On the topic of Microsoft/LinkedIn, aren't they the ones who used to (or maybe still do) assign the same task to multiple teams and let the teams compete for who can deliver? That does sound vaguely like what you propose.


Run all 100 in parallel in production and monitor for inconsistencies.


I remember one time we had to provide some important numbers to auditors. A data analyst on the team wrote some queries to collect this information, but my manager decided to write his own queries independently as a consistency check. The numbers didn’t match at all.

So he asked me to write my own queries as well to see who was right. Lo and behold, my numbers didn’t match either of theirs at all.


3 wrongs make for a much more interesting conversation than 1


Fred Brooks believes the max is 1 or 2 (if they have experience together), and explains how to scale an org around those people.


Does he say what to do when they die, quit, or retire?


So clever! The brief answer is you have many staff who have knowledge but not creative authority. If you think about it for a few minutes I think you’ll find some options.


The best way to de-risk a project is to run three at once. Just don't let the teams find out about the other ones.


Oh. My. God.

This video explains everything I've seen in the enterprise IT of a huge government org that was the result of a long series of back-to-back mergers and splits.

It's exactly this!

Conway's law, but over time.

Dysfunction not just because of the large hierarchy, but also because of the built up layers of history. Some elements still limping along, some essentially amputated and mere phantom limbs of the org tree now, some outright dead and spreading miasma and corruption from their decay.

I knew or suspected most of this, but to hear it said out loud so clearly has crystallised my understanding of it.


Yeah, once it hits, it really explains so much of the issues in large organizations (and explains a lot about small organizations / indie development as well). For example, why is it that mergers and buyouts rarely work out? Conway's law largely explains it! When a product is bought by an organization, the result is an integration over time of both organizations' org charts combined. If the companies are both large, this immediately becomes an intractable mess. If the original company was small and the new company is large, it _also_ becomes intractable, because now different teams will have to communicate to manage or split components that don't fit the communication patterns of the new organization, which leads to large refactors and rewrites, and in the end the essential qualities of the original software will inevitably change.

Even if you know this, there is nothing you can do - you have to organize yourself somehow, and how you do that can't be left up to just mirroring however the piece of software that was brought in happens to be organized - because _that too_ doesn't reflect an actual org chart but is an integration over time of all org charts that have touched it.

I don't even know how to think about the implications this has for the use of open source software and how open source is developed, like... I think there are huge opportunities for research into Conway's law and how it relates to software development practices, software quality, etc.


I have this naively optimistic hope that AIs will allow orgs to scale units past Dunbar’s number.

We humans can’t effectively communicate with groups larger than about 7 people, but AIs have no such limits. We could all simply talk to one manager that can integrate everything and everyone into a unified whole.

It’s like the ship Minds in the Culture series having hundreds or even thousands of conversations at once.


The more the initial fade of AI-assisted work sets in, and given the inherent vagueness and unpredictability of managing, I'm eagerly awaiting not my job but my boss's job being replaced by AI. There's no need for exactness, just superficial clarity, decisiveness and seeming coherence.


That's an interesting thought, yeah... technology and communication are definitely interwoven; things are possible today that were impossible before computers, simply due to communication constraints. One could imagine a large number of practically autonomous small teams organized "automatically", more mirroring a neural network than a hierarchy.


The problem is always communication because it is the means to cooperate. The root of many issues in software development is the simple fact that instead of letting the required communication pathways define the organization, it is the organization which defines the pathways and through that causes communication obstructions.

"Not Just Bikes" has a good few videos, including https://www.youtube.com/watch?v=n94-_yE4IeU and a couple more that talk about problems that larger roads effectively cause more traffic ("induced demand", "traffic generation"). Organizational structures are like roads, and like roads they can get overloaded, which in turn means traffic needs to be reduced. There is even communication jam, and to combat that something like enforced communication reduction (lower information throughput), etc. to keep this manageable. That also causes decision making being done with less and less information the more steps are included in a communication chain (like upwards/downwards in a hierarchy), which in turn means the quality of decision making is severely hampered by it.

This whole mess is also the reason why the agile manifesto puts humans before processes and other such things, in fact it implies you change even the organizational setup to fit the project, not the other way around. But in the space of "managerial feudalism" (David Graeber) this is pretty much impossible to pull off.


The tragedy of agile is that the practices that are labelled agile in practice tend to exemplify the exact opposite of everything that the manifesto advocates...


You might be correct, but the AI minds that you are contemplating don't exist yet, and there is no reason to think that they will be developed from current LLMs.

Once again, seizing the term AI to mean LLMs and other current generative techniques has poisoned clear thinking. When we think "AI", we are thinking about HAL and Cortana and C3PO, which is not what we actually have.


Interestingly positive; I think I might agree with you. It seems the no. 1 problem of good leaders in organizations is that they can't be everywhere at once. But an AI can be.


I've watched a lot of Casey Muratori's other presentations but just happened to find this one last week and I wholeheartedly agree. Like many people I'd heard of Conway's Law but always imagined it as a pithy truism and hadn't thought that the effects could run so deep.

Casey's example is focused on a (frankly, ridiculously) large organisation but I've seen the same thing in numerous small companies with just 20-30 developers, and it's hard not to imagine that this is universal, which is a pretty depressing thought.

Recently I've been involved in a new project where teams were set up to be 'domain-specific' with the idea that this somehow avoided the issues of Conway's Law, but this just feels like exactly the same problem, because team structures and the software that they end up producing are siloed in exactly the same way as the organisation structures that existed at the time.

Casey's point that the final design is inherently limited by the lack of high-bandwidth communication between the groups of people that need it most is also fairly demotivating, personally.

Tangentially, having been writing more Golang recently, this made me think of Golang's mantra (or was it Rob Pike's quote): "Don't communicate by sharing memory; share memory by communicating." Go/CSP's concurrency and sharing model is interesting, but I wonder if it's a fair analogue to compare sharing memory by communicating to low-bandwidth communication, and communication by sharing memory to the high-bandwidth communication that seems to be needed for effective designs.


What’s broken with current Office 365? It’s a good product and service.


This is what you see from the outside. LinkedIn also looks good from outside. This is the entire point of the story.


If the user experience is good, then the product is good. Users don't care about build time or internal dependency management.


Build time & internal dependency management are just a few markers used to predict where user experience will go in the future


Making something look good is very popular but entirely different from actual quality.


> LinkedIn also looks good from outside.

I partially disagree. IIRC their search is broken, and their job alerts are pretty lousy at de-duplicating advertised positions.

And IIRC they hijack the back button, which I find pretty offensive.


The existence of the product is broken. Office suites should not be web applications.


To be fair, it’s only a web application if you want it to be. I have all the usual office products installed locally. When I get a link that opens a document in the browser I click open in app and off it goes.


Microsoft has moved very quickly to add LLM functionality to multiple products.

Literally the opposite of LinkedIn. I’m approximately user 5000 on LinkedIn, and since I joined almost nothing useful has been added, despite a huge increase in complexity.


LinkedIn and Microsoft are the same. That said, I personally think the rush to push LLMs into everything is entirely consistent with my prediction and with the internal structure of Microsoft (current and historical).


Come on now, that's not true. Every week LinkedIn adds a new email marketing category which you haven't opted out of yet, because it didn't exist before.


Exactly. Remember that we are not customers, we are the product. Presumably, LinkedIn is adding a lot of useful features for their actual customers, they're just not visible to us.


Thanks for writing about this. Makes me like Hetzner even more.


I was just about to write that. No crypto scam nonsense in the same datacenter sounds like a plus to me.

(On a more serious note, I imagine the reason Hetzner does not want crypto-related stuff is for the same reasons most hosting companies are not very fond of porn-related sites: High rates of fraud, charge-backs and other potential legal liability).


I don’t have any research to back this up but I am willing to bet that this is more due to social stigma than any genetic component.

