0xdky's comments

I worked on a POC porting it to Windows and had to port GNU tar to support long file names. TLA was truly revolutionary!


Similarly, I managed a Solaris build of arch for a while. arch/larch/arx/tla/baz was pretty cool. But Tom was in denial about its shortcomings, particularly speed.


You will need a full repository in another directory to check out, and this will waste storage unless you share objects via alternates.


Another lesson I learned from 2+ decades of using Emacs:

Track your customizations in version control. Using the customize package and reading the diff in my .emacs file has taught me quite a bit about certain aspects of each package.

Side note: It would be awesome if Emacs could do the versioning as part of saving the customizations - build Emacs with libgit2 and make it a native git client.


The customize interface is quite useful. It's probably best suited to setups without a huge, intricate init. With time, custom.el (or init.el if you keep everything in the same file) can get difficult to manage. When that happens, a lot of people opt for something more readable and human-friendly, like use-package or a distribution such as doom-emacs.

The author mentions that some things can be hard to do outside of the customize interface. That is true. When that happens, I just use the interface and then move the setting from custom.el to use-package. But that's rare; I can't remember the last time I did that.


Yes. Also, don't put your own stuff in init.el. Put your own stuff in another file, and load it with load-file from init.el. This massively decreases the chances that Emacs will make a mess when it adds or updates its own stuff.


You could probably write it. Emacs is able to track which settings you've _applied_ but not yet saved; that could serve as the linchpin for generating versioned changes.


For a reason I forget, my .emacs.d/ ("hidden" dir) is a symlink to emacsdotd/ (not hidden), and, yup, it's in git. Some are going to say "why stop there!? version your entire user dir" and I can see the appeal (but haven't tried yet).

I'm one of those who track everything Emacs-related, including all the packages. When I upgrade packages, I first check that nothing broke, then commit "Bump avy to 20220114" or whatever. This way I can easily share the exact same config on several machines/several user accounts, and I know I can easily roll back to a known fully working setup. YMMV.


If by "version your entire user dir" you mean "version all your manually edited configs", that can work.

Init a git repo in your $HOME, add a .gitignore that by default excludes everything, then manually add just the files and folders you want to track.
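A minimal sketch of that kind of whitelist .gitignore (the re-included paths are just examples, not a recommendation):

    # Ignore everything at the top level of $HOME by default...
    /*
    # ...then re-include only what should be tracked.
    !/.gitignore
    !/.emacs.d/
    !/.bashrc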

Especially if you want to publish the repo, be careful, because it's *extremely* easy to accidentally expose confidential stuff.


Or just use something like GNU stow or chezmoi.


I hope there will be student towns with mostly students taking online courses, for the camaraderie and for learning by helping each other out.

Apart from the labs, I do not see a reason to go to college to learn.


> I hope there will be student towns with mostly students taking online courses, for the camaraderie and for learning by helping each other out.

You just described a campus.


I froze for a moment seeing this article, having worked at a major anti-virus company a long time back and used some low-level Win32 APIs.

Fortunately, I followed some of the techniques from the “Programming Applications for Microsoft Windows” book and the Detours project to intercept and execute custom code, mostly based on loading a custom DLL into the target remote process and using DllMain() to run it.
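Roughly, that classic pattern looks like the sketch below (not the original code; the PID and DLL path are placeholders and error handling is trimmed): the injector writes the DLL path into the target process and starts a remote thread at LoadLibraryA, and the loaded DLL's DllMain then runs inside the target.

    #include <windows.h>
    #include <cstdio>

    // Inject a DLL into the process with the given PID by making the target
    // call LoadLibraryA(dllPath) on a remote thread. The loaded DLL's
    // DllMain(DLL_PROCESS_ATTACH) is where the custom code executes.
    int main() {
        const DWORD pid = 1234;                        // placeholder target PID
        const char dllPath[] = "C:\\hooks\\hook.dll";  // placeholder DLL path

        HANDLE proc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
        if (!proc) { std::printf("OpenProcess failed\n"); return 1; }

        // Reserve memory in the target and copy the DLL path into it.
        LPVOID remote = VirtualAllocEx(proc, nullptr, sizeof(dllPath),
                                       MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        WriteProcessMemory(proc, remote, dllPath, sizeof(dllPath), nullptr);

        // kernel32 is mapped at the same base in every process in a session,
        // so the local address of LoadLibraryA is valid in the target too.
        auto loadLib = reinterpret_cast<LPTHREAD_START_ROUTINE>(
            GetProcAddress(GetModuleHandleA("kernel32.dll"), "LoadLibraryA"));

        // Remote thread entry point is LoadLibraryA, its argument the path.
        HANDLE thread = CreateRemoteThread(proc, nullptr, 0, loadLib, remote, 0, nullptr);
        WaitForSingleObject(thread, INFINITE);

        CloseHandle(thread);
        VirtualFreeEx(proc, remote, 0, MEM_RELEASE);
        CloseHandle(proc);
        return 0;
    }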


I had a colleague who wrote a tight loop creating objects on the heap in Java and in C++ to show that Java outperformed C++.

Once I implemented a simple pool allocator in C++, it ran circles around Java.
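Not the original benchmark, just a minimal sketch of the idea (the Point payload and loop count are made up): preallocate a slab of fixed-size slots and recycle them through a free list, so the timed loop never touches the general-purpose allocator.

    #include <cstddef>
    #include <new>
    #include <utility>
    #include <vector>

    // Fixed-size object pool: one slab allocated up front, slots handed out
    // from a free list, so the hot loop never calls operator new.
    template <typename T, std::size_t N>
    class Pool {
    public:
        Pool() : storage_(N) {
            free_.reserve(N);
            for (auto& s : storage_) free_.push_back(&s);
        }

        template <typename... Args>
        T* create(Args&&... args) {            // placement-new into a free slot
            if (free_.empty()) return nullptr;
            void* slot = free_.back();
            free_.pop_back();
            return new (slot) T(std::forward<Args>(args)...);
        }

        void destroy(T* p) {                   // run dtor, return slot to the list
            p->~T();
            free_.push_back(reinterpret_cast<Slot*>(p));
        }

    private:
        struct Slot { alignas(T) unsigned char bytes[sizeof(T)]; };
        std::vector<Slot> storage_;
        std::vector<Slot*> free_;
    };

    struct Point { int x = 0, y = 0; };

    int main() {
        Pool<Point, 1024> pool;
        for (long i = 0; i < 10'000'000; ++i) {   // the "tight loop" in question
            Point* p = pool.create();
            p->x = static_cast<int>(i);
            pool.destroy(p);
        }
    }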


Did you also implement object pooling for the Java variant (commonly used in high perf apps)?

Something tells me that didn't happen, because you saw "C++ running circles around Java", so you got the result you wanted and just stopped there.

If you did, you wouldn't be making this comment.


> Did you also implement object pooling for the Java variant (commonly used in high perf apps)?

In this specific case I don't think you need to; I've seen generated code (from Java sources) simply reuse an object in a tight loop. IOW, it doesn't allocate new memory for the instance on each iteration of the loop; the memory for the instance is allocated once and then reused.

(For a small allocation (a small instance) I would expect a smart compiler to not allocate anything and simply create the for-loop instance on the stack).


The optimization you are getting at has less to do with object size than with subsequent usage. If the object reference escapes, it has to be allocated on the heap. Value semantics could/will help here.


> The optimization you are getting at has less to do with object size than with subsequent usage.

Size plays a part: it determines whether an instance first gets allocated on the heap or on the stack[1]. Heap allocation gets expensive in a tight loop.

> If the object reference escapes, it has to be allocated on the heap. Value semantics could/will help here.

The assumption is that we are talking about local-only data objects (not returned or outliving the scope). Forgive (and correct) me if I am under the incorrect assumption.

[1] I'd expect a smart compiler to do this: a data object that requires 1MB should at no point be on the stack, while a data object that requires 32 bytes has no business invoking the allocator and possibly trapping into the kernel to fault in a new page. The specific thresholds depend on the runtime and OS support.


For these sorts of micro-benchmarks you can usually attain interesting results with Java. I once constructed one specifically designed to show how the JVM's ability to inline and un-inline code could make certain programs much faster than C.


Google search on iOS Safari prompts me to install the Google app on every search. This alone drove me nuts, and I switched to DDG.


I did something similar (~2015), but using the kernel NFS client with multiple mounts of the same volume via different IP addresses.

Using vectored IO and spreading requests across multiple connections greatly improved throughput. However, metadata operations cannot be parallelized easily without application-side changes.

In more modern kernels, NFS supports the ‘nconnect’ mount option to open multiple network connections for a single mount. I wonder if the approach of using libnfs for multiple connections is even required.

https://github.com/0xdky/iotrap
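Not the linked tool, just a minimal sketch of the idea, assuming the same export is mounted twice via different server IPs (the /mnt/nfs_a and /mnt/nfs_b paths and file name are made up): the same file is opened through both mounts and vectored reads are spread across them in parallel.

    #include <fcntl.h>
    #include <sys/uio.h>
    #include <unistd.h>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Read a range through one mount; each mount rides its own TCP connection
    // to the server, so reads issued on different mounts proceed in parallel.
    static void read_range(int fd, off_t offset, size_t len) {
        std::vector<char> a(len / 2), b(len - len / 2);
        struct iovec iov[2] = {
            { a.data(), a.size() },
            { b.data(), b.size() },
        };
        ssize_t n = preadv(fd, iov, 2, offset);   // vectored read at an offset
        if (n < 0) perror("preadv");
    }

    int main() {
        // Hypothetical paths: the same export mounted via two server IPs.
        int fd1 = open("/mnt/nfs_a/data.bin", O_RDONLY);
        int fd2 = open("/mnt/nfs_b/data.bin", O_RDONLY);
        if (fd1 < 0 || fd2 < 0) { perror("open"); return 1; }

        const size_t chunk = 4 << 20;             // 4 MiB per worker
        std::thread t1(read_range, fd1, 0, chunk);
        std::thread t2(read_range, fd2, (off_t)chunk, chunk);
        t1.join();
        t2.join();

        close(fd1);
        close(fd2);
    }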


I wonder if current hiring and rewarding practices contribute to over-engineering.

How would you look if you just invoked APIs to get your work done versus designing and implementing an end-to-end solution?

Many times, I have seen over-engineered solutions come from promotion-attached initiatives.


I used BDB as a Win32 profiler backend. The profiler was lightweight and would write a flat file with profiling data and function addresses to keep the captured data small.

A post-processing tool would read the profiler data and create a BDB file with support for extracting call graphs and doing top-N style analysis.
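Not the original tool, just a rough sketch of that kind of post-processing step using the Berkeley DB C API (the flat-record layout and file names are made up): each sample's function address becomes a B-tree key and a hit count is accumulated under it.

    #include <db.h>        // Berkeley DB C API
    #include <cstdint>
    #include <cstdio>

    int main() {
        DB* db = nullptr;
        if (db_create(&db, nullptr, 0) != 0) return 1;
        if (db->open(db, nullptr, "profile.db", nullptr, DB_BTREE, DB_CREATE, 0644) != 0)
            return 1;

        // Hypothetical flat record written by the in-process profiler.
        struct Sample { uint64_t func_addr; uint32_t hits; };

        FILE* raw = fopen("profiler.raw", "rb");
        if (!raw) { db->close(db, 0); return 1; }

        Sample s;
        while (fread(&s, sizeof s, 1, raw) == 1) {
            DBT key{}, val{};
            key.data = &s.func_addr;
            key.size = sizeof s.func_addr;

            // Accumulate: read the current count (if any) and add this sample's hits.
            uint64_t total = 0;
            val.data = &total;
            val.ulen = sizeof total;
            val.flags = DB_DBT_USERMEM;
            db->get(db, nullptr, &key, &val, 0);   // leaves total at 0 if key absent

            total += s.hits;
            val.data = &total;
            val.size = sizeof total;
            db->put(db, nullptr, &key, &val, 0);   // keyed by function address
        }

        fclose(raw);
        db->close(db, 0);
    }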

The final GUI was implemented in Visual Basic, since other developers would not use the TUI/CLI-based tools in a console.

The next project used BDB to store file system metadata on embedded NAS storage. We implemented a fast ‘find’-like service based on file metadata (stat fields) stored in BDB, with support for user-defined file metadata.

