
Among enterprises I work with, I'm seeing way more migration to self-hosted GitLab than I was a few years ago. Even among Azure-dependent orgs.

I think there's some risk with this too: more and more functionality is behind the enterprise tier. People try to work around this in various ways, but it's an unsatisfying experience. For example, trying to enforce merge request approval with pipeline stages.
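For the curious, that workaround looks roughly like a job early in the pipeline that queries GitLab's MR approvals API and fails if nobody has approved. A minimal sketch (untested), assuming GitLab's predefined CI variables in a merge request pipeline, an `API_TOKEN` you'd provision yourself, and curl/jq in the job image:

```
# Block the pipeline unless the merge request has at least one approval.
# CI_API_V4_URL, CI_PROJECT_ID, CI_MERGE_REQUEST_IID are predefined by GitLab;
# API_TOKEN is a project access token you'd have to set up yourself.
approvals=$(curl -s --header "PRIVATE-TOKEN: $API_TOKEN" \
  "$CI_API_V4_URL/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/approvals" \
  | jq '.approved_by | length')

if [ "$approvals" -lt 1 ]; then
  echo "No approvals on this MR; failing the stage." >&2
  exit 1
fi
```

And that's exactly the unsatisfying part: you end up reimplementing an access control feature as a script that anyone with write access to the CI config can delete.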

In my experience there's a pretty fundamental difference between business consultants and consultants who "build stuff". I've done both and have had experiences like both yours and the GP's, but I'd put the difference down to the expected role of the consultant rather than to the customer.

Great news! I hope prices go back down a bit thanks to the extra capacity. I used to take Acela 3 times a week about a decade ago and they were rarely completely full. Now they're more expensive and fully booked much of the time, which is a real shame.

How is "fully booked" a real shame?

Needing to book days in advance makes it unusable for short-notice trips (vs. driving), and due to the demand they basically doubled prices. It's now more expensive to take Acela than it is to take a plane; that wasn't the case a decade ago.

Rail should be easy to use.

I live in Switzerland where people are so comfortable taking the train they treat it like an extension of their living room.

Only in rare cases do I even book tickets in advance, like when going to Milano… otherwise I just use the Fairtiq app, which is a nationwide system for paying for tickets, including buses and trams…

You swipe right before you step on, swipe left when you step off and the system automatically calculates the best ticket for you.

There isn’t a “fully booked”.


Switzerland is also the size of one of the smallest states in America.

And, what’s your point?

It's easy to do in a small area, harder in a big area like the US.

I visited Switzerland recently and loved the train network. One really awesome feature was that the train stations basically doubled as shopping malls. Which makes a lot of sense, imo!

We'd leave our room for the day, have breakfast at a restaurant or coffee shop in the train station, then jump on the train to whatever outing we had planned. At the end of the day, we'd take the train back, pick up some groceries at one of the grocery stores in the station (I saw at least two major grocery stores in our station), and then head to the room and make dinner. I also needed to visit a pharmacy at one point during our stay, and the only pharmacy open at that sleepy hour was at the train station.

The train stations are really major hubs for the towns. Even if I didn't need to take the train that day, I was still likely to make a trip down to the train station for something. It's smart.


The new trains can go 160 mph, which is slightly faster than the old ones. But Acela's speed is really limited by the safety of the tracks, not the trainsets. I remember when they were testing 155 mph service in 2010, but they could only do it on tiny sections of the track. Sadly, it only really gets faster once they fix infrastructure like the bridges.

No, those are symptoms of an eating disorder.

Here's a sample of running the 120b model on Ollama with my MBP:

```
total duration:       1m14.16469975s
load duration:        56.678959ms
prompt eval count:    3921 token(s)
prompt eval duration: 10.791402416s
prompt eval rate:     363.34 tokens/s
eval count:           2479 token(s)
eval duration:        1m3.284597459s
eval rate:            39.17 tokens/s
```
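(For anyone who wants to compare on their own hardware: these are the timing stats that `ollama run <model> --verbose` prints after a response; the exact 120b model tag depends on which build you pulled.)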


I was extremely disappointed that they didn't choose "...may be black hole suns" for the title.

This post is a good example of why groundbreaking innovations often come from outsiders. The author's ideas are clearly colored by their particular experiences as an engineering manager or principal engineer in (I'm guessing) large organizations, and don't particularly resonate with me. If this is representative of how engineering managers think we should build AI tooling, AI tools will hit a local maximum based on a particular set of assumptions about how they can be applied to human workflows.

I've spent the last 15 years doing R&D on (non-programmer) domain-expert-augmenting ML applications and have never delivered an application that follows the principles the author outlines. The fact that I have such a different perspective indicates to me that the design space is probably massive and it's far too soon to say that any particular methodology is "backwards." I think the reality is we just don't know at this point what the future holds for AI tooling.


I could, of course, say one interpretation is that the ML systems you build have been actively deskilling (or replacing) humans for 15 years.

But I agree that the space is wide enough that different interpretations arise depending on where we stand.

However, I still find it good practice to keep humans (and their knowledge/retrieval) as much in the loop as possible.


I'm not disagreeing that it's good to keep humans in the loop, but the systems I've worked on give domain experts new information they could not get before -- for example, non-invasive in-home elder care monitoring, tracking "mobility" and "wake ups" for doctors without invading patient privacy.

I think at their best, ML models give new data-driven capabilities to decision makers (as in the example above), or make decisions that a human could not due to the latency of human decision-making -- predictive maintenance applications like detecting impending catastrophic failure from subtle fluctuations in electrical signals fall into this category.

I don't think automation inherently "de-skills" humans, but it does change the relative value of certain skills. Coming back to agentic coding, I think we're still in the skeuomorphic phase, and the real breakthroughs will come from leveraging models to do things a human can't. But until we get there, it's all speculation as far as I'm concerned.


I think it really, really depends on the language. I haven't been able to make it work at all for Haskell (it's more likely to generate bullshit tests or remove features than actually solve a problem), but for Python I've been able to have it build a whole (working!) graph database backup program just by giving it an API spec and some instructions like "only use built-in Python libraries".

The weirdest part about that is Haskell should be way easier due to the compiler feedback and strong static typing.

What I fear most is that it will have a chilling effect on language diversity: instead of choosing the best language for the job, companies might mandate languages that are known to work well with LLMs. That might mean TypeScript and Python become even more dominant :(.


(user name checks out, nice)

I share similar feelings. I don't want to shit on Python and JS/TS; those are languages that get stuff done, but they're a local optimum at best. I don't want the whole field to get stuck with what we have today. There's surely a place for a new programming language so much better that we'd scratch our heads wondering why we ever stuck with the status quo. But when LLMs work "good enough", why even invent a new programming language? And even if that awesome language existed today, why adopt it? It's frustrating to think about. Even language tooling like static analyzers and linters might get less love now. Although I'm cautiously optimistic, as these tools can feed into LLMs and thus improve how they work. So at least there's an incentive.


I think it's unlikely the next bubble will involve the stock market. I mean the last bubble (real estate) didn't either. It can still be a bubble even if it's mostly VC money going into it, because more companies, endowments, pension funds, and ETFs than ever are exposed through VC. I don't know what the "total VC money invested" graph looks like right now, but even if investment stays constant, the lack of exits would still cumulatively result in a bubble-like inflation over time.


Bubbles happen because they haven't happened before; people know their history and don't repeat the same bubble. So just because there hasn't been a catastrophic stock market bubble in the USA before doesn't mean it can't happen. It has happened in other countries, and those stocks didn't recover.

Bubbles look very impressive until they pop, and most people fall for them; that is why they are bubbles.


These “players” (though I like to think of them as scammers, since they add zero economic value) have been playing this game between tech stocks and crypto for 5+ years now. It's so blatantly obvious, but we have no regulators, and the banks don't give a damn since they're making money.

