Hacker News | sasmithjr's comments

> .NET is certainly better than Python, but I'm not very happy with the type system and the code organization versus my Rust projects.

Have you given F# a whirl?


You know, I tried F# like eight-ish years ago, and I loved it, but I couldn't break into doing it with enough regularity and depth that it made sense for me. I still do a decent amount of C# at work, and with my experience in Rust (algebraic data types, etc.), I imagine that F# would really help out a lot in our .NET code.


> So there must be some other reason you didn't get revenue?

Right. I think my biggest takeaway from this article is that another engineer discovered that sales and marketing are important and difficult jobs.


You can write a helper method (or use FsToolkit.ErrorHandling[0]) to simplify the F# example to:

    let Foo () = getData() |> Task.map _.value
And it'll be easier to work with the .NET ecosystem if you use the task computation expression[1] over the async one.

[0]: https://github.com/demystifyfp/FsToolkit.ErrorHandling

[1]: https://learn.microsoft.com/en-us/dotnet/fsharp/language-ref...


Isn't that a synchronous call?


For me, AI code generation for F# has been pretty good! There's one annoying thing Opus/Sonnet do (honestly don't remember what other models do): use old syntax for array indexing.

  values.[index] // Old way of indexing required a .
  values[index] // Supported for a while now
That's the "biggest" issue I run into, and it's not that big of a deal to me.

Yesterday, it did try to hallucinate a function that doesn't exist; compilation failed, so the agent changed the function to a fold and everything was hunky-dory.


I didn't look at the URL at first and was surprised when this turned into an ad. Oh well!

> Stop selling "unlimited", when you mean "until we change our minds"

The limits don't go into effect until August 28th, one month from yesterday. Is there an option to buy the Max plan yearly up front? I honestly don't know; I'm on the monthly plan. If there isn't a yearly purchase option, no one is buying unlimited and then getting bait-and-switched without enough time for them to cancel their sub if they don't like the new limits.

> A Different Approach: More AI for Less Money

I think it's really funny that the "different approach" is a limited time offer for credits that expire.

I don't like that the Claude Max limits are opaque, but if I really need pay-per-use, I can always switch to the API. And I'd bet I still get >$200 in API-equivalents from Claude Code once the limits are in place. If not? I'll happily switch somewhere else.

And on the "happily switch somewhere else", I find the "build user dependency" point pretty funny. Yes, I have a few hooks and subagents defined for Claude Code, but I have zero hard dependency on anything Anthropic produces. If another model/tool comes out tomorrow that's better than Claude Code for what I do, I'm jumping ship without a second thought.


TBH paying yearly for ANY LLM tool at this time is just pure insanity.

The field is moving so fast that whatever was best 6 months ago is completely outdated.

And what is top tier today, might be trash in a few months.


The second sentence is revealing. The creation of the 787 didn’t make the 747 “trash”.


You also couldn't just unsubscribe from the 747 and get a 787 at the same price.

Services are not the same thing as physical goods.


True, but the statement sort of implies that the models are intrinsically "trash" today, and we're just waiting for that to be revealed by the next relatively superior model.


What I'm saying without any implications is that there's no point in locking yourself to a yearly contract when the field is moving fast.

You are also paying for a service with no clear SLA or measurable performance indicators. You have no way to determine if what you got at launch is as powerful as what you're getting now. It's all about feels.


I don't think it's an exclusive choice between the two, though. I think senior engineers will end up doing both. Looking at GitHub Copilot's agent, it can work asynchronously from the user, so a senior engineer can send it off to work on multiple issues at once while still working on tasks that aren't well suited for the agent.

And really, I think many senior engineers are already doing both in a lot of cases where they're helping guide and teach junior and early mid-level developers.


> And really, I think many senior engineers are already doing both in a lot of cases where they're helping guide and teach junior and early mid-level developers.

Babysitting and correcting automated tools is radically different from mentoring less experienced engineers. First, and most important IMO, there's no relationship. It's entirely impersonal. You become alienated from your fellow humans. I'm reminded of Mark Zuckerberg recently claiming that in the future, most of your "friends" will be A.I. That's not an ideal, it's a damn dystopia.

Moreover, you're not teaching the LLM anything. If the LLMs happen to become better in the future, that's not due to your mentoring. The time you spend reviewing the automatically generated code does not have any productive side effects, doesn't help to "level up" your coworkers/copilots.

Also, since LLMs aren't human, they don't make human mistakes. In some sense, reviewing a human engineer's code is an exercise in mind reading: you can guess what they were thinking, and where they might have overlooked something. But LLMs don't "think" in the same way, and they tend to produce bizarre results and mistakes that a human would never make. Reviewing their code can be a very different, and indeed unpleasant WTF experience.


Guiding and teaching developers is rewarding because human connections are important

I don't mentor juniors because it makes me more productive; I mentor juniors because I enjoy watching a human grow and develop and gain expertise

I am reminded of reports that Ian McKellen broke down crying on the set of one of The Hobbit movies because the joy of being an actor for him was nothing like acting on green screen sets delivering lines to a tennis ball on a stick


and just to play devil's advocate, maybe some people don't enjoy that? remove the issue of training the next generation for a moment.

just like with open vs. closed offices or remote vs in-person, maybe some people have all the human interaction they want outside of work and don't mind "talking" to some AI as long as it gets shit done in the manner they want.


> and just to play devil's advocate

Your comment would be improved by simply removing that phrase. It adds nothing and in fact detracts.

> just like with open vs. closed offices or remote vs in-person, maybe some people have all the human interaction they want outside of work and don't mind "talking" to some AI as long as it gets shit done in the manner they want.

You're presenting a false dichotomy. If someone doesn't enjoy mentoring juniors, that's fine. They shouldn't have to. But why would one have to choose between mentoring juniors or babysitting LLM agents? How about neither?

sasmithjr was apparently trying to defend babysitting A.I. by making an analogy with mentoring juniors, whereas I replied by arguing that the two are not alike. Whether or not you enjoy using A.I. is an entirely separate issue, independent of mentoring.


> sasmithjr was apparently trying to defend babysitting A.I. by making an analogy with mentoring juniors

I regret adding that last bit to my comment because my main point (which I clearly messed up emphasizing and communicating) is that I think you’re presenting a false dichotomy in the original comment. Now that work can be done with LLMs asynchronously, it’s possible to both write your own code and guide LLMs as they need it when you have down time. And nothing about that requires stopping other functions of the job like mentoring and teaching juniors, either, so you can still build relationships on the job, too.

If having to attend to an LLM in any way makes the job worse for you, I guess we’ll have to agree to disagree. So far, LLMs feel like one of many other automations that I use frequently and haven’t really changed my satisfaction with my job.


> If having to attend to an LLM in any way makes the job worse for you

I think you're downplaying the nightmare scenario, and your own previous comment already suggests a more expansive use of LLM: "so a senior engineer can send it off to work on multiple issues at once".

What I fear, and what I already see happening to an extent, is a top-down corporate mandate to use AI, indeed a mandate to maximize the use of AI in order to maximize (alleged) "productivity". Ultimately, then, senior engineers become glorified babysitters of the AI. It's not about the personal choice of the engineer, just like, as the other commenter mentioned, open vs. closed offices or remote vs. in-person are often not the choice of individual engineers but rather a top-down corporate mandate.

We've already seen a corporate backlash against remote work and a widespread top-down demand for RTO. That's real; it's happened and is happening.


i was trying to frame it as something i'm also grappling with, but i digress; poor choice of words.

maybe you're responding to the wrong person? because i'm not even disagreeing with you on that. maybe they want both or neither, that's fine.

the person i'm responding to is framing mentoring as some kind of must-have from a "socialization" standpoint (which i disagreed with, but i get the practical aspect of it where if you don't have people train juniors there won't be seniors).


No, not "socialization" as in "having social interactions with other people"

I mean "socialization" as in "being a positive part of and building a society worth living in"


and why do you think that has to exist solely within the confines of work? not that you said that, but your comments seem to suggest that if you don't like or want to mentor junior devs then you don't value human connections. thus my comment about having enough connections outside of work.

if it's rewarding to you that's great, but don't frame it as something bigger than it is. i would hope we are all "being a positive part of and building a society worth living in" in our own way.


> if you don't like or want to mentor junior devs then you don't value human connections

If you don't like or want to mentor the younger generation then you are actively sabotaging the future of society because those people are the future of society

Why do I care about the future of society? Because I still have to live in it for another few decades


alright, we can agree to disagree because this is so obviously touching a chord with you and you're now literally making sweeping assumptions based on things i've never said.

maybe i like taking care of my friends' kids, volunteering, or doing other things that contribute to the "future of society"? personally, i think mentoring junior devs is slightly lower on the priority list, but that's my opinion.

seriously, how arrogant of you to make assumptions about how others think about the future based on a tiny slice of your personal life lol.


> maybe i like taking care of my friends' kids, volunteering, or doing other things that contribute to the "future of society"

That's great, that doesn't absolve you of your responsibility to also mentor juniors at work though

Those are different tasks in different worlds and they all need doing


nice deflection. i might not share your enthusiasm for mentoring junior devs, but i do it anyway because like you, i agree it's important. the point, though, is at the end of the day even if i didn't do it you have no fucking right to come with that moral high ground.

if you've optimized every facet of your life to do all the "responsible things" society needs then feel free to throw the first stone. anything else is just posturing.

and just a small thing, it's ironic that you're so fixated on socialization for society's sake while being so tunnel-visioned in defending your own definition of what that even means. i've given you plenty of examples but it just doesn't fit the one you personally adhere to.


> maybe some people have all the human interaction they want outside of work and don't mind "talking" to some AI as long as it gets shit done in the manner they want

This isn't about satisfying a person's need for socializing it is about satisfying society's need for well socialized people

You can prefer closed offices and still be a well socialized person

You can prefer remote work and still be a well socialized person

You can even prefer working alone and still be a well socialized person

If you are in favor of replacing all humans with machines, you are pretty much by definition an asocial person and society should reject you


you're making a strawman here. it was never black and white and i never advocated all humans being replaced with machines so we have zero interaction with each other.

every technological push has been to automate more and more and every time that's happened we've reduced socialization to some extent or changed the nature of it (social media anyone? and yes, this also has everything to do with remote vs in-person, etc, all which pull the lever on what level of socialization is acceptable).

just because it doesn't fit your particular brand doesn't mean it's wrong, and it's clear this is pushing on your line where you find it unacceptable. i could just as well argue that people who do not show up to an in-person office are not "socialized" to the degree society needs them to be.

the debate has always been to what degree is this acceptable.


> [Dugan allegedly] escorted them through a private back door to avoid arrest.

According to the complaint [0] on page 11, Flores-Ruiz still ended up in a public hallway and was observed by one of the agents. They just didn't catch him before he was able to use the elevator.

IANAL, but I don't think "Dugan let Flores-Ruiz use a different door to get to the elevator than ICE expected" should be illegal.

[0]: https://static01.nyt.com/newsgraphics/documenttools/3d022b74...


The outcomes are immaterial to the legal question of obstruction, the only factors are knowledge of the warrant and intent to help him escape. If he successfully avoided arrest but it cannot be proven that the judge intended that outcome, then she is not guilty of obstruction. If he got caught anyway but the judge intended to help him escape, that's still obstruction.

https://www.law.cornell.edu/wex/obstruction_of_justice


As noted by the linked page, those are minimum requirements. The relevant law regarding obstruction [0] is 18 USC §1505 [1]. It isn't immediately obvious to me that it was violated.

The first paragraph only appears to apply to physical evidence. The second paragraph appears to require more than merely assisting someone.

> Whoever corruptly, or by threats or force, or by any threatening letter or communication

The latter two obviously don't apply so that only leaves the former. Did the judge act "corruptly"?

The other law cited in the complaint is 18 USC §1071 [2], and the question would be whether leading someone to an alternate pathway constitutes either harboring or concealing the individual. I don't feel like letting someone out my back door constitutes "concealing" a person as the term is commonly used. As an example, hiding someone in a closet and then telling the officers that he isn't in the building would obviously qualify.

[0] https://static01.nyt.com/newsgraphics/documenttools/3d022b74...

[1] https://www.law.cornell.edu/uscode/text/18/1505

[2] https://www.law.cornell.edu/uscode/text/18/1071


The only thing that might be tricky about FizzBuzz is if the person doesn't know about the modulo operator. I can't remember the last time I used it in production code; I use it far more thinking about FizzBuzz than I do anywhere else.


If I didn't have the modulo operator, I would just check whether division resulted in a round number. I wouldn't rate someone who understands this basic principle lower than someone who just happens to know about the modulo operator.

Bonus points if you use Python and demonstrate that you know the difference between the / operator and the // operator. That's much more useful in day-to-day work.
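To make the round-number idea concrete, here's a throwaway Python sketch (my own illustration, not from the comment; `divides_evenly` is a made-up name). It also happens to show the `/` vs. `//` distinction mentioned above:

```python
def divides_evenly(n: int, d: int) -> bool:
    # True division (/) returns a float; floor division (//) truncates.
    # They agree exactly when d divides n with no remainder.
    # (Fine for FizzBuzz-sized numbers; huge ints can lose float precision.)
    return n / d == n // d

print(divides_evenly(15, 3))  # divides evenly
print(divides_evenly(16, 3))  # does not
```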


It's worth noting that the original FizzBuzz problem is not "given a number n, print..." but rather "for the first 100 natural numbers, print...". In this formulation, you don't need the modulo operator: you just initialize two counters at 3 and 5, respectively, decrement them each iteration, and when either or both reach 0, emit the corresponding word and reset that counter.
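That counter approach might look like this in Python (a quick sketch of the idea; `fizzbuzz_counters` is a hypothetical name):

```python
def fizzbuzz_counters(limit: int = 100) -> list[str]:
    out = []
    three, five = 3, 5  # countdown counters instead of modulo
    for n in range(1, limit + 1):
        three -= 1
        five -= 1
        word = ""
        if three == 0:
            word += "Fizz"
            three = 3  # reset the counter
        if five == 0:
            word += "Buzz"
            five = 5
        out.append(word or str(n))
    return out

print(fizzbuzz_counters(15))
```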


I agree people should be able to design things properly, but I'm not sure this ramp is actually a good example. It might be! But no one is talking about an obvious issue for any ramp that would exist in that photo: it merges bikes into pedestrian traffic. So I'd think that you specifically want a ramp that forces the bike to slow down.


For another perspective, if you put 22.5 lb on each side of the bar, you wouldn't call that a "1 plate" lift even though both sides add up to 45 lb. There are no 45 lb "plates" involved in the lift, thus it's not a "1 plate" lift.

If you search "1 plate overhead press" or "1 plate ohp", you'll see many references to how that's a 135 lb lift (a 45 lb bar plus one 45 lb plate per side).

