On the other hand, if people who don't care enough to compile it for themselves try it out, the Python devs can be flooded with complaints and bug reports that effectively come down to it being in alpha.
You get both sides (yes, you might limit some who would otherwise try it out).
I think requiring people to compile to try out such a still-fraught, alpha-level feature isn't too onerous. (And that's only from official sources; third parties can offer compiled versions to their hearts' content!)
For me this is a set of general strategies for breaking down problems. Here are some I use. (Apologies if these aren't all orthogonal to one another; they just feel different when I'm thinking of how to break a problem down.)
1. Break down the steps. Can you find a recipe of steps for achieving the thing? Then start with the first step. Maybe that's a small enough task. Maybe you don't have to perform all steps in order, and you can find a small-enough step to do next.
2. Isolate the fundamental challenges. There is often a tough nut to crack within the problem. Can you isolate that from the rest of the project and turn it into its own thing (I like to cast this as a "toy" problem)? When I say "isolate," I mean strip away all the complexity that isn't needed to get at the fundamental issue. Suppose I want to figure out how to create a robust messaging network. There might be user interfaces and caching and different kinds of messages and different networks and different failure mechanisms and performance issues and ... So just create a "toy" at each step: first, simply send & receive a message (a minimal sketch of that toy follows this list). Don't worry about performance, and don't worry much about robustness. You now have a small task whose completion achieves a fundamentally necessary part of the larger task. Finishing that will feel good--you have something that works!--and you've made real progress. You might also find examples of others doing something similar to this basic task, so you can work on your own and then compare notes with others to understand why they solved similar problems differently than you did (you might have come to something better, or not; either way, you now understand the fundamental problems involved). Now you can grow that toy, or take what you learned from it and apply it to the larger task.
3. Similar to 2, but maybe a different POV: The physics joke is approximating a cow as a perfect sphere to study its dynamics. Simplify the hell out of a problem! Maybe it feels ridiculously simple. Fine; now you are working with something completely tractable. You can then add complexity back into your model one wrinkle at a time.
4. Do something that's actually easy, even if it might not be "significant" from the "big challenges to getting this project working" POV. Maybe you've been frustrated for a week or two trying to solve the tough-nut-to-crack bit of the problem. Even your toy problem remains (what feels hopelessly) broken! Switch over to creating the GUI or something superficial but easily tractable that yields something satisfying when you finish. Simply stepping away from the hard problem for a day or two can re-motivate you when you come back to it. That time also gives your mind a chance to process solutions in the background (many people--myself included--have an "a ha!" moment when not thinking directly about a hard problem). And you are still being productive, moving toward the end goal. You had to make a GUI anyway at some point. Might as well be when you are stuck on the hard thing and feeling frustrated.
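To make the messaging toy in item 2 concrete, here's a minimal Python sketch under the simplest possible assumptions (one local socket, one message; all names invented for illustration): just send & receive a message and nothing more.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007  # loopback only; this is just a toy
ready = threading.Event()

def serve_once():
    """Accept one connection, echo back whatever arrives, then exit."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                      # server is ready for the client
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))

def send_and_receive(message: bytes) -> bytes:
    """Send one message and return the echoed reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(message)
        return cli.recv(1024)

if __name__ == "__main__":
    t = threading.Thread(target=serve_once)
    t.start()
    ready.wait()
    print(send_and_receive(b"hello, toy network"))  # b'hello, toy network'
    t.join()
```

No performance, no retries, no real robustness, but it works, and it's something to grow from.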
Getting good at breaking down problems took me many years. I credit my physics education as being particularly helpful (it trains you to think about problems & solutions in their extremes and to always connect solutions back to "does it make sense"). But much of the above also came from learning my own psychology: how I work, and what/when/how I'm motivated and in the best position psychologically to solve a problem. I expect this isn't too different for many people, but the details can vary from person to person.
Thank you for the extensive explanation. The problem I mentioned starts rather earlier: say I have X big tasks. I need to split them (applying your described process or otherwise) into smaller tasks. BUT now I'm looking at 2X tasks: the original X, plus, for each one, another task of splitting it into smaller ones. The whole stack only becomes more overwhelming this way...
A big part of the idea with my system is that you only identify 5 tasks at a time. Anything more than that and it becomes overwhelming. So the idea is to peel off the first 5 actionable tasks from your project(s) and deal with those before thinking further about the project.
Yes this implies having a general sense of how to accomplish the project and the tasks involved, but no it does not mean you need to have a master plan with every step mapped out. Every 5 steps you get to re-assess and course correct.
I welcome this question from interviewees and sometimes offer up the information without being asked.
I work in a small business where we do hardware, software, help the marketing folks, and do a little IT work where needed. I want someone who is curious, energetic, and enjoys taking on whatever challenge presents itself. They'll start in a pretty well-defined role in a well-defined domain, and I'll give them support in that role. But they will have every opportunity to branch out from there, and I believe the kind of employee I seek—as well as the company—will benefit if the employee fits this technical culture. I want to scare off people who want to be pigeon-holed and fed repetitive tasks.
To that end, I also like to discuss with candidates projects they've worked on in the past, rather than presenting them with new challenges of my own. Our normal work week doesn't involve isolated puzzles or single activities that one finishes in an hour. Finishing a project takes a long time & requires acquiring new knowledge, skills, and understanding, so I want to explore in depth something the candidate had a long time to work on, where this process did (or did not) transpire.
My POV is that I want to find a postdoc (or someone who could grow into this paradigm), not a clever parrot.
You act like you were misled, but the article, within the first few sentences, says he realized the tools are available to do this (including naming tesseract.js explicitly!), he just needed to glue them together. Then he details how he does that, and only then mentions he used an LLM to help him in that process. The author's article title is equally not misleading.
Or was there an earlier headline or subtitle here on HN that was misleading, which has since been changed?
64-bit unsigned integer nanoseconds gets you out to 584 years (that's the year 2554 if you're using the Unix epoch). That's good enough for me to use universally for passing times around in the internals of my code. User input and output get converted to and from that representation.
Half as many, of course, if you use a signed integer. If you don't need nanoseconds, then use microseconds and you get 292 thousand years to work with.
Integers are just a bit easier than floats for timestamps in my experience (e.g., comparing floats to one another is fraught and you'll be fighting this at every turn in your code).
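If you want to sanity-check those ranges, a quick back-of-the-envelope in Python (ignoring leap seconds and calendar details):

```python
NS_PER_YEAR = 365.25 * 24 * 3600 * 1e9   # nanoseconds in a Julian year

print(2**64 / NS_PER_YEAR)               # ~584.5 years of unsigned nanoseconds
print(1970 + 2**64 / NS_PER_YEAR)        # ~2554 with a Unix epoch
print(2**63 / NS_PER_YEAR)               # ~292 years if signed
print(2**63 / (NS_PER_YEAR / 1_000))     # ~292,000 years of signed microseconds
```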
You have standardized on int64 = nanoseconds. Libraries you use might have standardized on int64 = milliseconds, int64 = seconds, double = seconds, or the preferred DateTime class/struct of your programming language — even the C standard library has `struct tm` [0].
If you’ve wrapped your int64 in some struct/class/type-alias-without-automatic-downcasting, it might be fine. But if you haven’t, you might end up mixing the different scales, or littering the code with pointless conversions to and from the standard DateTime class/struct.
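A minimal sketch of that wrapping idea in Python (the class name and helpers here are made up for illustration); the point is simply that a wrapped timestamp can't silently mix with bare integers in other units:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TimestampNs:
    """Nanoseconds since the Unix epoch, kept distinct from bare ints."""
    ns: int

    @classmethod
    def from_seconds(cls, seconds: float) -> "TimestampNs":
        return cls(round(seconds * 1_000_000_000))

    def to_seconds(self) -> float:
        return self.ns / 1_000_000_000

    def __sub__(self, other: "TimestampNs") -> int:
        # Difference in nanoseconds; the conversion is explicit, not accidental.
        return self.ns - other.ns

t0 = TimestampNs.from_seconds(1_700_000_000.0)
t1 = TimestampNs(t0.ns + 250_000_000)   # 250 ms later
print(t1 - t0)                          # 250000000
# t1 + 5  -> TypeError: no implicit mixing with raw ints in unknown units
```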
A couple of big things: Fortran natively performs operations on arrays directly, like Matlab or NumPy in Python (Matlab was originally a REPL-style front-end to Fortran), and Fortran compilers tend to yield quite fast code (though in specific cases another language will outperform Fortran).
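Since the comparison is to NumPy, here's a tiny Python/NumPy sketch of that array-at-a-time style; this is roughly what a whole-array Fortran expression like `c = a * b + 2.0` does natively, with no library needed:

```python
import numpy as np

a = np.linspace(0.0, 1.0, 1_000_000)
b = np.sin(a)

# Whole-array arithmetic: no explicit loop over elements.
c = a * b + 2.0
print(c[:3])
```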
That website/community was created in part by the original author of the Python SymPy library, Ondřej Čertík. He is also working on his own Fortran compiler, which you can try via WebAssembly to play around with Fortran; you'll find links here if you want to play: https://lfortran.org
I've only dabbled a little, but I like the general idea, and I appreciate an F/OSS Fortran compiler being developed like this alongside active efforts to grow the Fortran community & push the language & its libraries forward.
I expect more widespread adoption of Fortran to be quite a ways out, but what Ondřej is doing is necessary (though not sufficient) for such adoption to happen.
Did you evaluate Cython? I'm not anti-Julia, but I like that my Cython code is usable out of the box from Python, with no wrapping, and then users can continue their Jupyter + Python scripting workflows with performant bespoke modules complemented by the full Python ecosystem.
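For anyone unfamiliar with that workflow, the Python-side glue is roughly this (the module and function names are hypothetical, just for illustration): a minimal setup.py compiles the .pyx file, and afterwards the module imports like any other Python module, with no hand-written wrapper layer.

```python
# setup.py -- build a hypothetical fastmod.pyx into an importable extension
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("fastmod.pyx"))

# After `python setup.py build_ext --inplace`, plain Python or a notebook can
# simply do:
#
#   import fastmod
#   result = fastmod.heavy_loop(data)
```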
Someday I'll do a project in Julia. But for some such projects, Rust seems all but guaranteed to be performant while Julia might or might not be, so I might still lean towards Rust (unless one of Julia's high-quality packages removes a lot of development time, which is a decent possibility).
I used it for another project but was not impressed. When I tested it, the documentation was scarce, and deployment is harder than with Julia because it needs a working C compiler alongside the CPython interpreter.
Marketing is a helluva drug. People are trained to trust big companies, and the bigger the company, the more they trust it. I think the issue is the variance in solutions: if you compare all the homegrown solutions against all the big-company solutions, the homegrown ones vary far more, and that gets managers into the mindset that there's less risk in a generic big-company solution. But of course that misses all the details for a specific case like yours.
It would be like interacting with people you know based on the average of everyone on Earth, as opposed to dealing with each person based on their own personality.
But it's easy to manage based on these kinds of broad generalities and, as the saying at least once went, no one gets fired for choosing IBM.
In this case, the third-party company is tiny. And they keep screwing up. They can't even get something as simple as right-to-left language translations right, even when we walked them through how it should work and shared code with them.
People use a great tool in a poor way and then broadly condemn the tool.
And any tool that is sufficiently flexible to be broadly useful can be used in very poor ways.
Jupyter is great; it gets me over the potential barrier of starting a task every time. I build and prove out an algorithm/task piece by piece. Once I'm happy, I move the meat of it to a function in a .py file, and move the code I used to test the algorithm into a unit test function. I delete the duplicated bits and replace them with imports, and then what remains is a tutorial/demonstrator notebook using the function I wrote, plus maybe some nice plots to go along with it that I wouldn't put in a unit test (and that don't show up in docstrings). This can be converted to Sphinx docs if the code gets big enough.
What a great tool for incrementally building software! In my world, I build brick by brick, not all at once. Jupyter is a key to that process.
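As a hypothetical example of that promote-out-of-the-notebook step (all names invented here): the cell that proved out the algorithm becomes a function in a .py file, the cells that checked it become a unit test, and the notebook keeps the plots and narrative.

```python
# smoothing.py -- the algorithm, promoted out of the exploratory notebook
import numpy as np

def moving_average(x, window: int):
    """Trailing moving average; the notebook keeps the plots and examples."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")


# test_smoothing.py -- the notebook cells that checked the output, kept as a test
def test_moving_average_constant_input():
    out = moving_average(np.ones(10), window=3)
    assert np.allclose(out, 1.0)
    assert len(out) == 8       # 'valid' mode: 10 - 3 + 1 points
```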