captainkrtek's comments

There seems to be so much value in planning, but in my organization, there is no artifact of the plan aside from the code produced and whatever change summary exists in the PR description. That makes it incredibly difficult to assess a change in isolation from its plan/process.

The idea that Claude/Cursor are the new high-level programming language for us to work in introduces a problem: we're not actually committing code in this "natural language", we're committing the "compiled" output of our prompting. That leaves us reviewing the "compiled code" without seeing the inputs (eg: the plan, prompt history, rules, etc.)


One challenge with code review as an antidote to poor-quality gen-AI code is that we largely see only the code itself, not the process or inputs behind it.

In the pre-gen-AI days, if an engineer put up a PR, it implied (somewhat) that they wrote the code, reviewed it implicitly as they wrote it, and made deliberate choices (ie: why this is the best approach).

If Claude is just the new high-level programming language, with prompting in natural language, the challenge is that we're not reviewing the natural language; we're reviewing the machine code without knowing what the inputs were. I'm not sure of a solution to this, but something along the lines of knowing the history of the prompting that ultimately led to the PR, the time/tokens involved, etc., might inform the "quality" or "effort" spent in producing it. A one-shotted feature and a multi-iteration feature may produce the same lines of code and general shape, but one is likely to be higher "quality" in terms of minimal defects.
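As a thought experiment, the kind of prompt-history metadata described above could be summarized into a provenance note attached to a PR. Everything here is hypothetical: the log schema, the field names, and `pr_provenance` are invented for illustration; no existing tool emits this format.

```python
import json

# Hypothetical prompt-session log: one entry per prompt/response turn.
# This schema is invented for illustration, not any real tool's output.
session_log = json.loads("""
[
  {"prompt": "Add retry logic to the HTTP client", "tokens_in": 120, "tokens_out": 850},
  {"prompt": "Use exponential backoff instead of fixed delay", "tokens_in": 45, "tokens_out": 600},
  {"prompt": "Add unit tests for the backoff schedule", "tokens_in": 30, "tokens_out": 700}
]
""")

def pr_provenance(log):
    """Summarize a prompting session into metadata a reviewer could glance at:
    how many iterations went into the change, and roughly how much model
    effort (tokens) it took."""
    return {
        "iterations": len(log),
        "tokens_in": sum(turn["tokens_in"] for turn in log),
        "tokens_out": sum(turn["tokens_out"] for turn in log),
    }

print(pr_provenance(session_log))
```

A reviewer seeing `iterations: 3` versus `iterations: 1` for two PRs of similar shape would at least know which one went through more refinement before landing.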

Along the same lines, when I review a gen-AI-produced PR, it feels like I'm reading assembly and having to reverse-engineer how we got here. It may be code that runs and is perfectly fine, but I can't tell what the higher-level inputs were that produced it, or whether they were sufficient.


Will there be interest in vision-based wearables?

Google Glasses - dead

Apple Vision Pro - dead

FB/Meta x RayBan - dead soon(?)

It seems they can’t get over the social hurdle of having a camera strapped to your face, and the effects of that on the people around you. I think the tech is neat, but it's not socially accepted enough as a concept to be viable. My sister is big into TikTok and filming all the time, and it personally makes me hesitant to be nearby, as I’m not comfortable being filmed constantly.


I don't want people with camera glasses around me either. But the stupid thing is: the cameras don't even need to exist. Google Glass could show its notifications just fine without a camera. My Xreal Air works great without one.

It's the big tech companies that are pushing for pervasive cameras. Not consumers saying they can't live without a camera on their face.


It is almost certainly a problem with size, cost, and features.

The wearables are just too big, too expensive, and the feature set too small.

Much like with VR goggles, every problem they solve is solved far better and more cheaply with another device most people already have and use.

I don't think it has anything to do with the moral or social implications of taking pictures of people privately. The second any of the above are resolved, society will willingly give up even more privacy without a hiccup, as we've done every other time the choice was presented.


Agreed. But perhaps that’s the problem? Instead of trying to go instantly mainstream via the consumer market, perhaps the toehold is niche professional/commercial markets? Or niche consumer markets provided by a business (e.g., museums)?

It’s not a tech issue, it’s a marketing issue (and lack of imagination).


I think it goes beyond the social hurdle. I have an Oculus, and I just never use it. A phone or laptop screen generally just feels good enough. It's easier to start and stop using, and it doesn't feel like I'm shutting myself off from the world when I do.


I use my oculuses a LOT. All the time. They're great for gaming and watching content.


50+ weeks? so a year?

I've been in big tech for 12+ years now. The first handful of years are definitely a grind to earn your spot, get a couple promos. After that though, it can become quite a bit easier to coast if that's what you're looking for. People know you, know you're probably valuable cause you're "senior" or "staff" and still here, and likely leave you alone. But yeah, as a newer engineer these days, it still requires the initial commitment to earn the privilege of coasting in a big tech company.


> 50+ weeks? so a year?

Maybe they meant "50+ [hour] weeks"


My biggest problem with using an LLM for coding is that it removes engineers from understanding the true implementation of a system.

Over the years, I learned that a lot of one's value as an engineer can come from knowing how things actually work. I've been in many meetings with very senior engineers postulating how something works arguing back and forth, when quietly one engineer taps away on their laptop, then spins it around to say "no, this is the code here, this is how it actually works".


Agreed. I've seen a number of short-form news pieces/documentaries on the effects of datacenter development across different parts of America. Pollution, noise, light, water impacts, energy costs, etc. There's not a lot to like, and they create very few jobs relative to the communities around them.


AI data centers will be the job destroyers, not creators.

100 local people maintain the data center while the AIs running inside it replace 1 million people.


If we can deal with the personal economics of the transition, isn’t freeing up human capital to do something else a good thing?


Yes, unfortunately we cannot deal with the personal economics of such a transition :)


The upper class who holds all the power does not want people to have a good life. They want to extract as much as possible from most of us.

So, no, because said human capital is holding the shorter end of the stick and will be worse off.


CGP Grey once asked "What happens to humans when it becomes uneconomic to employ them?" eg, the value of their economic output is functionally zero.


If you like speculative fiction on this topic, read Manna by Marshall Brain while you still can (the author died not long ago, so it may not stay up).

https://marshallbrain.com/manna1


We should just develop cold fusion. It's gotta be easy, right?


I 100% agree that AI data centers are bad for people.

In my opinion, compute-related data centers are a good product, though. Offering some GPU services might be fine, but honestly I'll tell you what happened (similar to another comment I wrote):

AI gave these data center companies tons of money (or they borrowed it), and then they bought GPUs from Nvidia and became GPU-centric (also AI-centric) to jump even further on the hype.

The core offering of datacenters, to me, should be a normal form of compute (CPU, RAM, storage; as an example, YABS performance of the whole server), not "just what GPU does it have".

Offering some GPU on the side is perfectly reasonable to me if need be, perhaps where workloads need a bit of GPU, but overall, compute-oriented datacenters seem nice.

Hetzner is a fan favourite now (which I deeply respect), and for good reason; I feel like their model is pretty understandable. They offer GPUs too, iirc, but you can just tell from their website that they love compute.

Honestly, the same is true for most independent cloud providers. The only places where we see a complete saturation of AI-centric data centers are probably the American trifecta (Google, Azure, and Amazon) and of course Nvidia, Oracle, etc.

Compute-oriented small-to-indie data centers/racks are definitely pleasant, although that market has raced to the bottom. But only because, let's be honest, the real incentive for building software appears when VSCode forks make billions, so people (techies at least) usually question such a path, and non-techies just don't know how to sell/compete in online marketplaces.


I'm no economist, but if (when?) the AI bubble bursts and demand collapses at the price point memory and other related components are at, wouldn't prices recover?

not trying to argue, just curious.


I'm no economist either, but I imagine the manufacturing processes for the two types of RAM are too different for supply to quickly bounce back.


IF a theoretical AI bubble bursts, sure. However, the largest-capitalized companies in the world and all the smartest people able to do cutting-edge AI research are betting otherwise. This is also what the start of a takeoff looks like.


As a customer of GitHub Actions, it anecdotally feels like GitHub experiences issues frequently enough that this isn't a problem.


I've lived in Seattle my whole life, and have worked in tech for 12+ years now as a SWE.

I think the SEA and SF tech scenes are hard to differentiate perfectly in an HN comment. However, I think any "Seattle hates AI" sentiment has more to do with the incessant pushing of AI into all the tech spaces.

It's being claimed as the next major evolution of computing, while also being cited as reasons for layoffs. Sounds like a positive for some (rich people) and a negative for many other people.

It's being forced into new features of existing products, while adoption of said features is low. This feels like cult-like behavior where you must be in favor of AI in your products, or else you're considered a luddite.

I think the confusing thing to me is that things which are successful don't typically need to be touted so aggressively. I'm on the younger side and generally positive to developments in tech, but the spending and the CEO group-think around "AI all the things" doesn't sit well as being aligned with a naturally successful development. Also, maybe I'm just burned out on ads in podcasts for "is your workforce using Agentic AI to optimize ..."


I think the obvious things are:

- Deviation in consistency/texture/color/etc.

- Obvious signs related to the above (eg: diarrhea, dehydration, blood in stool).

Ultimately though, you can get the same results by just looking down yourself and being curious if things look off...

tldr: this feels like literal internet-of-shit IoT stuff.

