
Perhaps you are thinking of megahal https://homepage.kranzky.com/megahal/Index.html or, if a bit later in the millennium, cobe https://teichman.org/blog/

ah, probably so, looks like there were eggdrop scripts for megahal, thanks!

I'm curious how the cost of performing these CT scans compared to the profit reaped by Haribo while the batteries were selling.


Lumafield sells CT scanners, so these posts serve double duty as advertising for their capabilities. Given how many times their previous posts have been shared, I'm sure the ROI is great.


This is basically good marketing content for Lumafield, which sells the CT scanners. The cost to them is almost nothing, just the opportunity cost of doing something else on the tool.


> more examples in that thread

Some supposition: A Fourier amplitude image should show that pattern as peaks at a certain angle/radius location. The exact location may be part of the identification scheme. Running peak finding on the Fourier image and then zeroing out the frequencies in the peak should remove the pattern. Modeling the shape of the peak would allow mimicking the application of a legit SynthID signature.
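A minimal sketch of the removal half of that idea (assuming numpy/scipy; the peak threshold and notch size are invented guesses, and I have no knowledge of the actual SynthID pattern):

    # Find strong off-center peaks in the Fourier amplitude image and
    # notch them out. Thresholds and radii are illustrative only.
    import numpy as np
    from scipy.ndimage import maximum_filter

    def remove_spectral_peaks(img, notch_radius=3, peak_factor=50.0):
        # img: 2D float array (grayscale). Returns the image with
        # strong off-center spectral peaks suppressed.
        F = np.fft.fftshift(np.fft.fft2(img))
        amp = np.abs(F)

        # Local maxima that stand well above the median amplitude.
        peaks = (amp == maximum_filter(amp, size=9))
        peaks &= amp > peak_factor * np.median(amp)

        # Leave the DC / low-frequency region at the center alone.
        cy, cx = amp.shape[0] // 2, amp.shape[1] // 2
        yy, xx = np.ogrid[:amp.shape[0], :amp.shape[1]]
        peaks &= (yy - cy) ** 2 + (xx - cx) ** 2 > (10 * notch_radius) ** 2

        # Zero a small disc of frequencies around each detected peak.
        for py, px in zip(*np.nonzero(peaks)):
            F[(yy - py) ** 2 + (xx - px) ** 2 <= notch_radius ** 2] = 0

        return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

Mimicking a legit signature would then mean adding modeled peaks back in rather than zeroing them, though whether the result would pass a real SynthID check I have no idea.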

If anyone tries/tried this already, I'd love to see the results.


> I use wired headphones to study with Anki (AnkiDroid) because I've found most (inexpensive) Bluetooth headphones require a second or two to begin playing.

1-2 seconds is an eon for audio latency, so I guess something other than BT itself is going on in the headphones. Unless you have particularly bad luck in what headphones you use.

FWIW, I use a variety of cheap and not so cheap BT headphones across multiple devices and apps including AnkiDroid and have not perceived any latency.

If switching to wired removes the latency then it does seem to indicate something in the BT stack of your device. I wonder if you experience the lag when using AnkiDroid + BT on another device.


Thank you. I actually have since switched devices, but have not yet tested on the new device. The old device was a flagship phone, the Note 10 Lite. That phone served me well for four years; I'll test on the S24 Ultra that just replaced it.


The lack of proper indentation (which you noted) in the Python fib() examples was even more apparent. The fact that both AIs you tested failed in the same way is interesting. I've not played with image generation; is this type of failure endemic?


My hunch in that case is that the composition of the image implied left-justified text, which overrode the indentation rule.



deepwiki doesn't spider. Repos are indexed upon request. The request dialog accepts a non-GitHub URL.


Now I'm wondering who requested my repo lmao.


I have a fairly large code base that has been developed over a decade that deepwiki has indexed. The results are mixed but how they are mixed gives me some insight into deepwiki's usefulness.

The code base has a lot of documentation in the form of many individual text files. Each describes some isolated aspect of the code in dense, info-rich detail that is not entirely easy for humans to consume. As numerous as these docs are, the code has many more aspects that lack explicit documentation. And there is a general lack of high-level documentation that ties each isolated doc into a cohesive whole.

I formed a few conclusions about the deepwiki-generated content: First, it is really good where it regurgitates information from the code docs, and rather bad or simply missing for aspects not covered by the provided docs. Second, deepwiki is so-so at providing a higher layer of documentation that sort of ties things together. Third, its sense of which aspects are important is heavily biased by how much doc coverage they have.

The lessons I take from this: deepwiki does better ingesting narrative than code. I can spend less effort polishing individual documentation (not worrying about how easy it is for humans to absorb) and should instead spend that effort filling in gaps, both in the details and in higher-level layers of narrative that unify the detailed documentation. I don't need to spend effort making that unification explicit via sectioning, linking, ordering, etc., as one might expect for a "manual" with a table of contents.

In short, I can interpret deepwiki's failings as identifying gaps that need filling by humans while leaning on deepwiki (or similar) to provide polish and some gap putty.


If you document the why rather than the how, you often end up tying high-level concepts together.

E.g. if you describe how the user service exists, you won't necessarily capture where it is used.

If you document why the user service exists, you will often mention who or what needs it to exist, the thing that gives it a purpose. Do this throughout and everything ends up tied together at a higher level.


This blog post has a lot of good ways to think about traffic.

There is one particular phenomenon I ponder on my commute to work along 45 MPH "stroads": the interplay between speeders, slugs, and the many stop lights.

I strictly keep to the speed limit in daylight and good weather (slower otherwise), start slowing well in advance of an upcoming red, and accelerate briskly when red turns to green if not blocked by other users of the road.

The vast majority of the other users have the opposite speed profile. They go well above the speed limit (60+ is not uncommon to see), often passing me at the last second before safely or not-so-safely stopping at a red, and then take their sweet time getting up to the limit (and then beyond) after a green. The fact that most of them drive enormous apartments on wheels perhaps explains some of this behavior.

The main hypothesis I am interested in is that their strategy of speeding to the next red light and lazily getting going at green (if they notice the light change) is actually counterproductive to throughput and to maximizing average speed. Speeding and bunching at the red, coupled with glacial acceleration up to and beyond the limit, is far slower. Keeping to the speed limit, slowing gradually (sometimes catching red->green before stopping), and speeding up briskly is the winner, assuming I'm not blocked by lumbering behemoths.

That is, stopping is slower than speeding is fast.
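
A toy kinematics sketch of the comparison (all numbers are invented for illustration, not measured on my commute):

    # One red light partway down a 45 MPH stroad.
    # Driver A holds the limit, coasts down early, launches briskly.
    # Driver B speeds, brakes hard to a stop, launches lazily.
    DT = 0.1            # s, simulation step
    LIGHT_POS = 500.0   # m, stop line
    END_POS = 1000.0    # m, end of the measured stretch
    GREEN_AT = 30.0     # s, when the light turns green

    def travel_time(cruise, accel, decel):
        # Seconds to cover END_POS given cruise speed (m/s),
        # launch acceleration and braking rate (m/s^2).
        x, v, t = 0.0, cruise, 0.0
        while x < END_POS:
            if t < GREEN_AT and x < LIGHT_POS:
                gap = LIGHT_POS - x
                # Brake only once needed to stop by the line: v^2 = 2*a*gap.
                if v > (2.0 * decel * gap) ** 0.5:
                    v = max(0.0, v - decel * DT)
            elif v < cruise:
                v = min(cruise, v + accel * DT)
            x += v * DT
            t += DT
        return t

    # A: ~45 MPH (20 m/s), brisk 3 m/s^2 launch, gentle 1.5 m/s^2 braking.
    # B: ~60 MPH (27 m/s), lazy 1 m/s^2 launch, hard 4 m/s^2 braking.
    print("A:", round(travel_time(20.0, 3.0, 1.5), 1), "s")
    print("B:", round(travel_time(27.0, 1.0, 4.0), 1), "s")

With these made-up numbers the limit-keeper arrives a few seconds earlier, though of course it all hinges on where the light is in its cycle.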


It depends on how long they're above the speed limit and you're below it.

If you want to win the race: max acceleration, max speed, max deceleration. Anything else is suboptimal.


> then take their sweet time getting up to the limit (and then beyond) after a green

I know this is bad for my fuel/electric efficiency, but I enjoy being the first car at the stop bar during a red light because of this. Means I can accelerate faster and merge lanes without waiting for other drivers to make a spot, even if those other cars ultimately end up passing me a mile later.


> Have you considered a network manager?

I've used "nmtui" on Linux for many years to do this. "nm" = "Network Manager".


Seconded nmtui.

The bluetui author also has impala, which is a tui for the network manager. But in this case, nmtui is good enough.


Thanks, this sounds like what I need ;)


