> When you say "LLMs did not fully solve this problem" some people tend to respond with "you're holding it wrong!"

> I think they're sometimes right! Interacting with LLMs is a new skill, and it feels pretty weird if you're used to writing software like it's 2020. A more talented user of LLMs may have trivially solved this problem.

So one thing I only recently figured out is that using ChatGPT via the web browser chat is massively different from using OpenAI's code-focused Codex model / interface. Once I switched to using Codex (via the VS Code extension + my own ChatGPT subscription), the quality of answers I got improved dramatically.

So if you're trying to use an LLM to help with debugging, make sure you're using the right model!! There are apparently massive differences between models of the same generation from the same company.


I appreciate the author's work in doing this and writing it all up so nicely. However, every time I see someone doing this, I can't help but wonder why they aren't just using SLURM + Nextflow. SLURM can easily cluster the separate computers as worker nodes, and Nextflow can orchestrate the submission of batch jobs to SLURM as a managed pipeline of tasks. The individual tasks submitted to SLURM would be the user's own R scripts (or any script they have). Combine this with Docker containers on the nodes to manage the dependencies needed for task execution, and possibly Ansible to manage the nodes themselves (installing the SLURM daemons, packages, etc.). Taken together this creates a FAR more portable, system-agnostic, and language-agnostic data analysis workflow that can seamlessly scale over as many nodes and data sets as you can shove into it.

This is a LOT better than trying to write all the code that does the communication and data passing between nodes directly in R. It's not clear to me that the author actually needs anything like that, and what's worse, I have seen other authors write exactly that in R and end up re-inventing the wheel of implementing parallel compute tasks (in R). It's really not that complicated:

1) write an R script that takes a chunk of your data as input, processes it, and writes output to some file,
2) use a workflow manager to pass chunks of the data to discrete parallel task instances of your script / program, and submit those tasks as jobs to
3) a hardware-agnostic job scheduler running on your local hardware and/or cloud resources.

This is basically the backbone of HPC, but it seems like a lot of people "forget" about the 'job scheduler' and 'workflow manager' parts and jump straight to gluing data-analysis code to hardware. Also important to note that most robust workflow managers such as Nextflow already include the parts like "report task completion", "collect task success / failure logs", "report task CPU / memory resource usage", etc., so that you, the end user, only need to write the parts that implement your data analysis.
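
To make it concrete, a pipeline like this can be tiny. Here's a rough Nextflow (DSL2) sketch of the pattern, just to show the shape of it; the process_chunk.R script name, the chunks/*.csv path, the rocker image, and the resource numbers are placeholders I made up, not anything from the author's setup:

    // sketch only: one SLURM job per data chunk, R dependencies come from the container
    nextflow.enable.dsl = 2

    process PROCESS_CHUNK {
        executor 'slurm'                  // hand each task to the SLURM scheduler
        container 'rocker/r-ver:4.3.1'    // hypothetical image holding your R deps
        cpus 2
        memory '4 GB'

        input:
        path chunk_csv

        output:
        path 'results.csv'

        script:
        """
        Rscript process_chunk.R ${chunk_csv} results.csv
        """
    }

    workflow {
        // one parallel task per chunk file; Nextflow handles submission, retries, logs
        Channel.fromPath('chunks/*.csv') | PROCESS_CHUNK
    }

Run it with something like "nextflow run main.nf -with-report" and the scheduling, per-task logs, and CPU / memory reports come for free, while your R script stays a dumb single-chunk program.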


Keep doing tech work, but work for more meaningful organizations. Look into STEM, science, and health fields that need help with their technology. Shifting your career away from tech is a massive mistake; you need to shop your skills to organizations that are more meaningful to you. Non-tech science and health companies and orgs won't pay as well as pure tech or other fields, but you get the satisfaction of knowing that your work changes the world for the better and possibly saves lives.


This author's problem isn't GitHub, it's the fact that they used Rust when they should have used Go. They never would have had this issue.


Adderall XR + to-do lists

For work purposes I keep handwritten to-do lists that I rewrite every week or so.

This is in addition to the team's Jira tickets and Scrum etc.

There is no "switching off"; you're just f-ed.


It's always extremely weird to me when people have to make this distinction, because I was under the impression that ALL Adderall prescriptions are for the XR extended release, which comes in capsules full of small beads of medication. No one should be taking Adderall IR instant release tablets. Those things have almost no reason to exist. I really don't understand who is taking those things and where people are getting the impression that IR tablets are a normal Adderall formulation. "Long acting" Adderall has been the norm for decades now, and I haven't even seen or heard of anyone taking IR tablets since like the early 2000s.


I was given a prescription for XR. It took me a month or more to even try it, because I'd sleep in, then it would take me a while to get moving, and by that time I was worried it would still be active when I'd ideally be thinking I should go to bed. Something that didn't last as long would give me much more control.


Because there are Ritalin and Concerta, I think it's just easier to understand and explain. I'm less familiar with Adderall, but from a few searches it looks like IR, XR, and Vyvanse are all still prescribed? And Adderall IR is also approved for narcolepsy, at least.


The XR mechanism is shit compared to how Vyvanse and Concerta manage theirs.

If you're a slow metaboliser you end up getting a big overlap around 3 hours in, which can be quite unpleasant, hence the non-XR compound.


If I take the lowest dose of Concerta I will not sleep that night, no matter how early I take it. This was pretty devastating when I was younger and not mature enough to realize what was going on. Parents were obsessed with ADD medication at the time. I think Vyvanse or XR Adderall would be a similar nightmare for me.


Interesting.

I had similar issues with XR methylphenidate and Dex, but Concerta and Vyvanse were fine if I took them first thing.

I suspect a not-insignificant number of people are slow metabolisers and are having a shit run with stimulants as a result.


Yes. Never tried the others. Strattera: weird side effects, quit immediately. Ritalin is good. Concerta also good... too good. Haven't tried it in years; I'd only guess that, if anything, I'm a slower metaboliser now that I'm older.


Using an LLM for programming work is a skill that takes practice and sometimes a little luck, just like Google searching for help with programming work.

It takes practice to figure out which things the LLM handles well and how best to present your problems to the LLM to get a good result.

It takes luck that the specific things you're trying to get results for are things the LLM actually can handle well.


Gotta use Scroll Reverser unfortunately. Sometimes even that breaks though. Sad


Shortcuts.app and AppleScript work for this.


This is a fake issue. You should just be using window snapping, which is finally included directly in the OS.


> I'm sure AI can easily get to the 99%, but does it help with the rest?

Yes, the AI can help with 100% of it. But the operator of the AI needs to be able to articulate this to the AI.

I've been in this position, where I had no choice but to use AI to write code to fix bugs in another party's codebase, then PR the changes back to the codebase owners. In this case it was vendor software that we rely on, in which the vendor hadn't yet fixed critical bugs. And exactly as you described, my PR ultimately got rejected because even though it fixed the bugs in the immediate sense, it presented other issues due to not integrating with the external frameworks the vendor used for their dev processes. At that point it was just easier for the vendor to fix the software their way instead of accepting my PR. But the point is that I could have made the PR correct in the first place, if I as the AI operator had had the knowledge needed to articulate these more detailed and nuanced requirements to the AI. Since I didn't have that information, the AI generated code that worked but didn't meet the vendor's spec. This type of situation is incredibly easy to fall into, and it's a good example of why you still need a human at the wheel on projects to set the guidance, but you don't necessarily need the human to be writing every line of code.

I don't like the situation much, but this is the reality of it. We're basically just code reviewers for AI now.

