Hacker News | hintymad's comments

Honest question: why do people automatically equate "fully autonomous weapons" with something like killer robots? My immediate reaction is that even the best-in-class rapid-fire gun has a hard time identifying and tracking drones. So we'd need AI to do better tracking, which leads to a fully autonomous weapon. And I really don't get why that's a bad thing.

Of course, a company should have the freedom to choose not to do business with the government. I just think that automatically assuming the worst intentions of the government is not as productive as setting up a good enough legal framework to limit the government's power.


What you are describing would be "partially autonomous." Per Dario Amodei's original statement here: https://www.anthropic.com/news/statement-department-of-war he had no issue with that. "Fully autonomous" specifically means that the AI chooses a target and engages without any human intervention at all. If the human selects or approves a target, and the weapon then automates tracking and engagement, that's still only partially autonomous.

I’m not sure that “killer robot” is the actual concern outside of media hyperbole. I’m imagining a loitering munition-type drone that has some kind of targeting package loaded into it with different parameters describing what it should seek and destroy. Instead of waiting for intelligence and using human command to put the munition on target, it hangs out and then engages when it’s certain enough that it’s found something valid.

In a world where LLMs produce very convincing but subtly wrong output, this makes me uncomfortable. I get that warfare without AI is in the past now, but war and rules of engagement and AI output etc etc etc all seem fuzzy enough that this is not yet a good call even if you agree with the end goals.


> I’m imagining a loitering munition-type drone that has some kind of targeting package loaded into it with different parameters describing what it should seek and destroy. Instead of waiting for intelligence and using human command to put the munition on target, it hangs out and then engages when it’s certain enough that it’s found something valid.

I'm sorry, you've just literally described a "killer robot" in more words.


The only saving grace is that the killbots had a pre-set kill limit which I exceeded by throwing wave after wave of my own men at them until they simply shut down.

Yeah, I guess my point is that “killer robot” evokes a terminator-like image for a lot of people. Something that marches around and kills of its own accord. I don’t like either one, but I don’t think they’re the same thing.

Dario himself said that he was against using Claude to build a fully automated weapon because the technology was far from perfect, so he didn't want to hurt our soldiers or innocent people. I think his description matched a killer robot, and I don't agree with his reasoning because it's not like the military researchers didn't have the agency to find out what works and what doesn't.

On the other hand military researchers once considered training pigeons to act as torpedo guidance systems by pecking on levers.

We have traditional autonomous weapons (and counter-defense). They operate on millisecond or faster timescales with existing RF sensors. They are not and will not be using LLMs or other transformers. Maybe ChatGPT will update some realtime Ada code; they formally verify some of that stuff so maybe that won't be terrifyingly dangerous.

Where autonomous transformer-based munitions will be used is basically "here is a photo of a face; find and kill this human," and in loitering munitions that take their time analyzing video and then decide to identify and attack a target on their own.

EDIT: Or worse: "identify suspicious humans and kill them"


We all do business with the government. We pay the military to protect our gold. It is fundamentally a protection racket that we voted for. And one could argue that the military, as the protector of your gold, has the final decision as to what it can and can't do with your technology.

Oh, you think the current administration only wants robots that kill other robots! Sweet Summer Child!

It's not fully autonomous ice cream machines, it's fully autonomous _weapons_. Are you stupid or are you dumb? I don't think you're asking an honest question.


Please explain what kind of fully autonomous weapons system the Pentagon would build that wouldn't be designed to kill people.

For that matter, explain why the Pentagon would balk at spying on every American.


There has been tension between Qwen's research team and Alibaba's product teams, say, the Qwen app team. And recently, Alibaba tried to impose DAU as a KPI. It's understandable that a company like Alibaba would force a change of product strategy for any number of reasons. What puzzles me is why they would push out key members of their research team. Doesn't the industry have a shortage of model researchers and builders?

Perhaps they wanted future Qwen models to be closed and proprietary, and the authors couldn't abide by that.

> I cannot be alone in feeling that titles (within "tech" in particular) are almost completely arbitrary?

I remember that 10 or so years ago, an E5 at Google was considered a pretty prestigious position. At Amazon, L6 was such a high achievement that the entire India site had one L6 among more than 300 engineers. But somehow things started to change. Everyone expected to get promoted every couple of years. There was a joke at Amazon along the lines of "L8 is the new L7."

My guess is that two factors came into play. One is that Facebook (later Meta) started to promote people really fast, so other companies followed. The other is that managers gradually treated promotion as a tool to retain the people they needed. Once that's the incentive, a long title ladder becomes a natural choice.


Isn't being an engineering manager about leverage? Someone needs to organize people, allocate resources, or even decide the direction of products. We may say that ICs can make such decisions equally well, but every company has a hierarchy and someone does call the shots. And for better or for worse, some people are indeed good at navigating company dynamics and driving an organization forward, even though they may suck at building. An example would be IBM's Watson Jr., who as a salesperson was known for being awkward at mastering IBM's tech. Even in a holacratic company like Zappos or Valve, some people still manage, right?

Richard Gabriel wrote a famous essay, Worse Is Better (https://www.dreamsongs.com/WorseIsBetter.html). The MIT approach vs. the New Jersey approach does not necessarily apply to a discussion of the merits of coding agents, but the essay's philosophy seems relevant. AI coding sometimes sacrifices correctness or cleanness for simplicity, but it will win and win big as long as the produced code works per its users' standards.

Also, the essay notes that once a "worse" system is established, it can be incrementally improved. Following that argument, we can say that as long as the AI code runs, it creates a footprint. Once the software has users and VC funding, developers can go back and incrementally improve or refactor the AI's mess, to a satisfying degree.


I hope people can ask themselves why the goal is "winning" and "winning big," and not making a product that you are proud of. It shouldn't be about VC funding and making money; shouldn't we all be making software to make the world a little bit better? I realize we live in an unfortunate reality surrounded by capitalism, but giving in to that seems shortsighted and dismissive of actual problems.

I hope people can see that "winning big" using that process is very likely NOT to be "winning long term."

(From GP) "AI coding sometimes sacrifices correctness or cleanness for simplicity, but it will win and win big as long as the produced code works per its users' standards."

Those users' standards are an ephemeral target for any software beyond a one-shot script or a hobby project with a minimal user:dev ratio. Incorrect and unclean code simply isn't conducive to the many iterations needed when those "users' standards" change. And as we all know, that change is _inevitable_, and oftentimes happens before the software in question has even had a single release! Get ready to throw ever more tokens at trying to correct and clean if you ever really "win big" and need to actually support the product.

It's very much gross short-sighted thinking that goes right along with the gross short-sighted thinking providing all the [fake] value around this crap.


Some of my projects are built with the goal of making really good software. Some of my projects are built with the goal of making money. I take pride in doing things well, but I don't let my pride get between me and financial freedom.

“Winning” is just the subjective word I quickly picked. It certainly could be another one, such as the success due to a great product that you mentioned.

That is true, but society as a whole does not reward "making software to make the world a little bit better." No one will come and say wow; only yourself in the mirror will.

I have the same feeling when creating my artworks: I suffer through the process of creation and learning, while someone else makes money with an AI-generated artwork.

Sometimes I wonder if it matters at all.


> It shouldn't be about VC funding and making money; shouldn't we all be making software to make the world a little bit better

I agree with you, but I think you and I are on the wrong website for this mentality.


  Once the software has users and VC funding, developers can go back and incrementally improve or refactor the AI's mess, to a satisfying degree.
Or in my case, the AI is going back to refactor some poor human written code.

I will fully admit that AI writes better code than me and does it faster.


What definition of simplicity implies that it can be at odds with correctness?

I would say "facility" instead of "simplicity" here.

So essentially California is becoming more and more like the EU? It's curious to see how it pans out. Maybe the EU's model turns out to be better than a more laissez-faire model like the US's. Who knows.

What's even more curious is that California voters seem not to care at all. As long as the government can collect more taxes with more altruistic slogans, the voters will stay happy.


Which EU law mandates age verification on my personal computer at home exactly?

> save face from the absurd overhiring that they did in 2022 and 2023

I wonder how, all of a sudden, we got so many candidates back from 2020 to 2022.


Speaking of Cursor, we use Cursor's agent nowadays more than its IDE. But then why should a company pay for Cursor instead of just signing deals with the top model providers and then using something like OpenCode or its own coding agent? That would be more cost efficient, as the company wouldn't need to pay the per-token markup that Cursor adds.

In the latest interview with Claude Code's author (https://podcasts.apple.com/us/podcast/lennys-podcast-product...), Boris said that writing code is a solved problem. This brings me to a hypothetical question: if engineers stop contributing to open source, would AI still be able to learn the knowledge of software development in the future? Or has the field of computer science plateaued to the point that most of what we do is a linear combination of well-established patterns?

> Boris said that writing code is a solved problem

That's just so dumb to say. I don't think we can trust anything that comes out of the mouths of the authors of these tools. They are conflicted. Conflict of interest, in society today, is such a huge problem.


There are bloggers that can't even acknowledge that they're only invited out to big tech events because they'll glaze them up to high heavens.

Reminds me of that famous exchange, by noted friend of Jeffrey Epstein, Noam Chomsky: "I’m not saying you’re self-censoring. I’m sure you believe everything you say. But what I’m saying is if you believed something different you wouldn’t be sitting where you’re sitting."


It's all basically a sensationalist take to shock you and get attention.

> That's just so dumb to say

Depends. It's true of dumb code and dumb coders. Another reason why, yes, smart people should not trust it.


He is likely working on a very clean codebase where all the context is already reachable or indexed. There are probably strong feedback loops via tests. Some areas I contribute to have these characteristics, and the experience is very similar to his. But in areas where they don’t exist, writing code isn’t a solved problem until you can restructure the codebase to be more friendly to agents.

Even with full context, writing CSS in a project where vanilla CSS is scattered around and wasn’t well thought out originally is challenging. Coding agents struggle there too, just not as much as humans, even with feedback loops through browser automation.


It's funny that "restructure the codebase to be more friendly to agents" aligns really well with what we were supposed to have been doing already, but many teams slack on: quality tests that are easy to run, and great documentation. Context and verifiability.

The easier your codebase is to hack on for a human, the easier it is for an LLM generally.


Turns out the single point of failure irreplaceable type of employees who intentionally obfuscated the projects code for the last 10+ years were ahead of their time.

I had this epiphany a few weeks ago, and I'm glad to see others agreeing. Eventually most models will handle context windows large enough that this will sadly not matter as much, but it would be nice for the industry to still do everything it can to produce better-looking code that humans can see and appreciate.

It’s really interesting. It suggests that intelligence is intelligence, and the electronic kind also needs the same kinds of organization that humans do to quickly make sense of code and modify it without breaking something else.

Truth. I've had a much easier time grappling with codebases I keep clean and compartmentalized with AI; over-stuffing context is one of the main killers of its quality.

Having picked up a few long neglected projects in the past year, AI has been tremendous in rapidly shipping quality of dev life stuff like much improved test suites, documenting the existing behavior, handling upgrades to newer framework versions, etc.

I've really found it's a flywheel once you get going.


All those people who thought clean well architected code wasn’t important…now with LLMs modifying code it’s even more important.

> He is likely working on

... a laundry list phone app.


Even as the field evolves, the phoning home telemetry of closed models creates a centralized intelligence monopoly. If open source atrophies, we lose the public square of architectural and design reasoning, the decision graph that is often just as important as the code. The labs won't just pick up new patterns; they will define them, effectively becoming the high priests of a new closed-loop ecosystem.

However, the risk isn't just a loss of "truth," but model collapse. Without the divergent, creative, and often weird contributions of open-source humans, AI risks stagnating into a linear combination of its own previous outputs. In the long run, killing the commons doesn't just make the labs powerful. It might make the technology itself hit a ceiling because it's no longer being fed novel human problem-solving at scale.

Humans will likely continue to drive consensus building around standards. The governance and reliability benefits of open source should grow in value in an AI-codes-it-first world.


> It might make the technology itself hit a ceiling because it's no longer being fed novel human problem-solving at scale.

My read of the recent discussion is that people assume the work of a far smaller number of elites will define the patterns for the future. For instance, an implementation of low-level networking code can be a combination of patterns from ZeroMQ. The underlying assumption is that most people don't know how to write high-performance concurrent code anyway, so why not just ask them to command the AI instead.


> The underlying assumption is that most people don't know how to write high-performance concurrent code anyway, so why not just ask them to command the AI instead.

The reflexive data economics of LLM input mean that when you reduce the future volume of that input to the few experts who "know how to write X anyway," the LLM labs just lost one of their most important inputs: all those non-experts who voted with their judgment and left, in the wake of their effort to use the expert-written code, grist for the LLM input-weighing mill.

I find it is usually the non-experts who run into the sharp operational edges the experts didn't think of. When you throw the non-experts out of the marketplace of ideas, you're often left with hazardous tooling that would just as soon cut your hand off as help you. It would be a hoot if the LLMs and experts decided to output everything and train in Common Lisp, though.

If, handed just Babbage's Difference Engine or the PDP-11 Unix V7 source code and nothing else, LLMs could speed-run and eventually re-derive the analogs of Zig, ffmpeg, YouTube, and themselves, I'll grant that "just let them cook with the experts" is a valid strategy. The information imparted by the activity around the source code is deeply recursive, and absent that I'm not sure how the labs are going to escape the local minimum they're digging themselves into by materially shrinking that activity. If my hypothesis is correct, then the LLM labs are stripping away, at industrial scale, the very topsoil their products rely upon, and that is a cheap single-turn game that gets enormously more expensive in further iterations, once synthetic topsoil has to be created.


> My read of the recent discussion is that people assume the work of a far smaller number of elites will define the patterns for the future.

Even if we assume that's true, what will prevent atrophy of the skillset among the elites with such a small pool of practitioners?


Money and fame. Lots of money and fame.

There is no shortage of Olympic hopeful elite athletes every four years, despite the incredibly small pool of competitors at each Games.

Same for musicians.

This kind of winner-take-all economics, or superstar market, is what capital wants in its ideal world in markets with near-zero marginal costs of distribution. Even if software creation does not fall to this kind of labor market in the long term, LLMs can establish a "market can be irrational longer than you can stay solvent" dynamic, where capital runs the labor market for software like this for a generation or three before having to face the reflexivity music, like it did for US manufacturing.


And it mostly happens in government-funded and/or commercially viable sports, with public schools where kids train for free, scholarships, numerous competitions, etc. To gather those selected elites, we take an enormous pool of aspiring athletes and support them from the ground up (usually with our taxes).

Where such support systems don't exist, you have a relatively shallow talent pool, and the best performers are a far cry from what could have been possible otherwise.


I think you mean software engineering, not computer science. And no, I don’t think there is reason for software engineering (and certainly not for computer science) to be plateauing. Unless we let it plateau, which I don’t think we will. Also, writing code isn’t a solved problem, whatever that’s supposed to mean. Furthermore, since the patterns we use often aren’t orthogonal, it’s certainly not a linear combination.

I assume that new business scenarios will drive new workflows, which will require new software engineering work. In the meantime, I assume that computer science will drive paradigm shifts, which will in turn drive truly different software engineering practices. If we don't have advances in algorithms, systems, etc., I'd assume that people can slowly abstract away all the hard parts, enabling AI to do most of our jobs.

Or does the field plateau because engineers treat "writing code" as a "solved problem"?

We could argue that writing poetry is a solved problem in much the same way, and while I don't think we especially need 50,000 people writing poems at Google, we do still need poets.


> while I don't think we especially need 50,000 people writing poems at Google, we do still need poets

I'd assume that an implied concern of most engineers is how many software engineers the world will need in the future. If it's a situation like the world needing poets, then the field is only for the lucky few. Most people would be out of a job.


I saw Boris give a live demo today. He had a swarm of Claude agents one-shot the most upvoted open issue on Excalidraw while he explained Claude Code for about 20 minutes.

No lines of code written by him at all. The agent used Claude for Chrome to test the fix in front of us all, and it worked. I think he may be right, or close to it.


Did he pick Excalidraw as the project to work on, or did the audience?

It's easy to be conned if you're not looking for the sleight of hand. You need to start channelling your inner Randi whenever AI demos are done, there's a lot of money at stake and a lot of money to prep a polished show.

To be honest, even if the audience "picked" that project, it could have been a plant shouting out the project.

I'm not saying they prepped the answer, I'm saying they prepped picking a project it could definitely work on. An AI solvable problem.


That is the same team that shipped an app that uses React for a TUI, that uses gigabytes of memory for a scrollback buffer, and that had text scrolling so slow you could get a coffee in between.

And that then had the gall to claim writing a TUI is as hard as a video game. (It clearly must be harder, given that most dev consoles or text interfaces in video games consistently use less than ~5% CPU, which at that point was completely out of reach for CC.)

He works for a company that crowed about an AI-generated C compiler that was so overfitted, it couldn't compile "hello world"

So if he tells me that "software engineering is solved", I take that with rather large grains of salt. It is far from solved. I say that as somebody who's extremely positive on AI usefulness. I see massive acceleration for the things I do with AI. But I also know where I need to override/steer/step in.

The constant hypefest is just vomit inducing.


I wanted to write the same comment. These people are fucking hucksters. Don’t listen to their words, look at their software … says all you need to know.

>writing code is a solved problem

sure is news to the models tripping on my thousands-of-LOC jQuery legacy app...


Could the LLM rewrite it from scratch?

Boss, the models can't even get all the API endpoints from a single file, and you want to rewrite everything?!

Not to mention that maybe the stakeholders don't want a rewrite; they just want to modernize the app and add some new features.


My prediction: soon (e.g., in a few years) agents will be the ones doing the exploration and building better ways to write code, build frameworks, etc., replacing open source. That said, software engineers will still be in the loop. But there will be far fewer of them.

Just to add: this is only the prediction of someone who has a decent amount of information, not an expert or insider.


I really doubt it. So far these things are good at remixing old ideas, not coming up with new ones.

Generally, we humans come up with new things by remixing old ideas. Where else would they come from? We are synthesizing priors into something novel. If you break the problem space apart enough, I don't see why an LLM can't do the same.

LLMs cannot synthesize text; they can only concatenate or mix statistically. Synthesis requires logical reasoning. That's not how LLMs work.

Yes it is: LLMs perform logical multi-step reasoning all the time; see math proofs, coding, etc. And whether you call it synthesis or statistical mixing is just semantics. Do LLMs truly understand? Who knows, probably not, but they do more than you make it out to be.

I don't want to speak too much out of my depth here, as I'm still learning how these things work on a mechanical level, but my understanding of how they "reason" is that they're more or less having a conversation with themselves, i.e., burning a lot of tokens in the hope that the follow-up questions and answers they generate lead to a better continuation of the conversation overall. But just like talking to a human, you're likely to come up with better ideas when you're talking to someone else, not just yourself, so the human in the loop seems pretty important for getting the AI to remix things into something genuinely new and useful.

They do not. The "reasoning" is just adding more text in multiple steps, and then summarizing it. An LLM does not apply logic at any point, the "reasoning" features only use clever prompting to make these chains more likely to resemble logical reasoning.

This is still only possible if the prompts given by the user resemble what's in the corpus. And the same applies to the reasoning chain: for it to resemble actual logical reasoning, the same or extremely similar reasoning has to exist in the corpus.

This is not "just" semantics if your whole claim is that they are "synthesizing" new facts. That is your choice of misleading terminology, and it does not apply in the slightest.


There are so many timeless books on how to write software, design patterns, and lessons learned from production issues. I don't think AI will stop being used for open source. In fact, with the increasing number of projects adjusting their contributor policies to account for AI, I would argue that what we'll see is both people who love to hand-craft their own code and people who use AI to build their own open source tooling and solutions. We will also see an explosion in the need for specs. If you give a model a well-defined spec, it will follow it. I get better results the more specific I get about how I want things built and which libraries I want used.

> has the field of computer science plateaued to the point that most of what we do is a linear combination of well-established patterns?

Computer science is different from writing business software to solve business problems. I think Boris was talking about the second, not the first. And I personally think he is mostly correct, at least for my organization. It is very rare for us to write any code by hand anymore. Once you have a solid testing harness and a peer review system run by multiple, different LLMs, you are in pretty good shape for agentic software development. Not everybody's got these bits figured out. They stumble around and then blame the tools for their failures.
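To make the workflow concrete, here's a minimal sketch of such a gate. Everything here (`run_tests`, `make_reviewer`, `review_gate`, the veto words) is a hypothetical stub for illustration, not a real API: in practice `run_tests` would invoke the project's actual test suite, and each reviewer would call a different LLM backend.

```python
# Hypothetical sketch of an agentic review gate. All names are illustrative
# stubs; a real system would run the project's test suite and call distinct
# LLMs as independent reviewers.

def run_tests(patch: str) -> bool:
    """Stub test harness: here, any non-empty patch 'passes'."""
    return bool(patch.strip())

def make_reviewer(name: str, veto_word: str):
    """Stub reviewer that rejects any patch containing its veto word."""
    def review(patch: str) -> bool:
        return veto_word not in patch
    return review

def review_gate(patch: str, reviewers) -> bool:
    """A patch lands only if the tests pass AND every reviewer approves."""
    if not run_tests(patch):
        return False
    return all(review(patch) for review in reviewers)

reviewers = [make_reviewer("model-a", "TODO"),
             make_reviewer("model-b", "password")]

print(review_gate("def add(a, b):\n    return a + b\n", reviewers))  # True
print(review_gate("", reviewers))  # False: empty patch fails the tests
```

The point of the design is that the agent's output never merges on its own say-so; it has to clear an objective check (tests) plus multiple independent judgments before a human ever looks at it.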


> Not everybody's got these bits figured out. They stumble around and then blame the tools for their failures.

Possible. Yet that's a pretty broad brush. It could also be that some businesses are more heavily represented in the training set. Or some combo of all the above.


"Writing code is a solved problem": disagree.

Yes, there are common parts to everything we do. At the same time, I've been doing this for 25 years and most projects have some new part to them.


Novel problems are usually a composite of simpler and/or older problems that have been solved before. Decomposition means you can rip most novel problems apart and solve the chunks. LLMs do just fine with that.

The creator of the hammer says driving nails into wood planks is a solved problem. Carpenters are now obsolete.

Prediction: open source will stop.

Sure, people did it for the fun and the credit, but the fun quickly goes out of it when the credit goes to the IP laundromat and the fun is had by the people ripping off your code. Why would anybody contribute their work for free in an environment like that?


I believe the exact opposite. We will see open source contributions skyrocket now. There are a ton of people who want to help and share their work, but technical ability was a major filter. If the barrier to entry is now lowered, expect to see many more people sharing stuff.

Yes, more people will be sharing stuff. And none of it will have long-term staying power. Or do you honestly believe that a project like GCC or Linux would have been created and maintained for as long as they have been by noobs with AI tools?

Technical ability is an absolute requirement for the production of quality work. If the signal drowns in the noise then we are much worse off than where we started.


I'm sure you know the majority of GCC and Linux contributors aren't volunteers, but employees who are paid to contribute. I'm struggling to name a popular project where that isn't the case. Can you?

If AI is powerful enough to flood open source projects with low-quality code, it will be powerful enough to be used as a gatekeeper. Major players who benefit from OSS, say, Google, will make sure of that. We don't know how it will play out. It's shortsighted to dismiss it altogether.


> I'm struggling to name a popular project where that isn't the case. Can you?

There’s emacs, vim, and popular extensions of the two. OpenBSD, lots of distros (some do develop their own software), SDL,…


Ok but now you have raised the bar from "open source" to "quality work" :)

Even then, I am not sure that changes the argument. If Linus Torvalds had access to LLMs back then, why would that discourage him from building Linux? And we now have the capability of building something like Linux with fewer man-hours, which again speaks in favor of more open source projects.


Many did it for liberty - a philosophical position on freedom in software. They're supercharged with AI.

I don’t believe people who have dedicated their lives to open source will simply want to stop working on it, no matter how much is or is not written by AI. But I also have to agree: I find myself laughing more and more lately about just how many resources we waste creating exactly the same things over and over in software. I don't mean generally, like languages; I mean specifically. How many trillions of times has a form with username and password fields been designed, developed, had meetings held over it, been tested, debugged, transmitted, and processed, only to ultimately be rewritten months later?

I wonder what all we might build instead, if all that time could be saved.


> I don’t believe people who have dedicated their lives to open source will simply want to stop working on it, no matter how much is or is not written by AI.

Yeah, hence my question can only be hypothetical.

> I wonder what all we might build instead, if all that time could be saved

If we subscribe to economics' broken-window theory, then the investment in such repetitive work is not investment but waste. Once we stop such investment, we will have a lot more resources to work on something else, bringing out a new chapter of the tech revolution. Or so I hope.


> If we subscribe to economics' broken-window theory, then the investment in such repetitive work is not investment but waste. Once we stop such investment, we will have a lot more resources to work on something else, bringing out a new chapter of the tech revolution. Or so I hope.

I'm not sure I agree with the application of the broken-window theory here. It's a metaphor intended to counter arguments in favor of make-work projects for economic stimulus: the idea is that breaking a window always has a net negative effect on the economy. Even though it creates demand for a replacement window, the resources needed to replace a window that already existed are just being allocated to restore the status quo ante, and the opportunity cost of that is everything else the same resources might have been used for if the window hadn't been broken.

I think that's quite distinct from manufacturing new windows for new installations, which is net positive production, and where newer use cases for windows create opportunities for producers to iterate on new window designs, and incrementally refine and improve the product, which wouldn't happen if you were simply producing replacements for pre-existing windows.

Even in this example, lots of people writing lots of different variations of login pages has produced incremental improvements -- in fact, as an industry, we haven't been writing the same exact login page over and over again, but have been gradually refining them in ways that have evolved their appearance, performance, security, UI intuitiveness, and other variables considerably over time. Relying on AI to design, not just implement, login pages will likely be the thing that causes this process to halt, and perpetuate the status quo indefinitely.


> Boris said that writing code is a solved problem.

No way, the person selling a tool that writes code says said tool can now write code? Color me shocked at this revelation.

Let's check in on Claude Code's open issues for a sec here and see how "solved" all of its issues are. Or my favorite: how their shitty React TUI, which pegs modern CPUs and consumes all the memory on the system, is apparently harder to get right than video games! Truly the masters of software engineering, these Anthropic folks.


Even if you like them, I don't think there's any reason to believe what people from these companies say. They have every reason to exaggerate or outright lie, and the hype cycle moves so quickly that there are zero consequences for doing so.

Or maybe software engineers are not coachmen, with AI as the diesel engine to their horses. Instead, software engineers are minstrels: they disappear if all they do is move knowledge from one place to another.

But does the agent have people skills?? I'm good at dealing with people.
