After a million years of testing software in prod, after collapsed banks, exploded spacecraft, huge bugs, and other software disasters, the systems that naturally survive might be those designed with TDD.
Which raises the question: if that's true, why didn't nature use TDD? And if nature doesn't use it, maybe it isn't the right way to design things?
Same here. I ran into this especially when dealing with problems that required me to go back and (un)learn stuff from scratch.
I found that the greatest barrier to dealing with seemingly hard concepts was getting the right intuition about what "might" be happening under the hood. For me, this is absolutely crucial, even if my intuition about a "thing" isn't completely correct.
As someone who didn't learn a lot of the basics in a classroom, I dive in head-first and read papers, I mean a lot of papers, to help build this mental model. Even though I don't understand everything in them, they let me pick up patterns for solving similar problems I might be having.
For the things you don't understand, you can always fill in those gaps in your mental model as you dig deeper into the abstraction cake and repeat the whole intuition-building loop.
I find that the same applies to math and other domains that build on a bunch of primitives which have to be really well understood before you can reason at higher levels of abstraction.
I feel software is no different... dig deep enough, there will be a hashmap of some sort.
The `uncomplicate` libraries are awesome. Thanks so much for this write-up and for putting out some of my favorite pieces of open source code.
It's just so easy to iterate rapidly when doing experiments on the GPU with Clojure and these abstractions. It just feels right.
I had to deal with similar memory-corruption problems, but I could just piggyback on Clojure's concurrency primitives to keep them under control, without pulling my hair out over all the ugly boilerplate, and with that sweet, sweet REPL on the side.
`(with-release...)`? Oh yes! Please.
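For anyone who hasn't seen it, a minimal sketch of what `with-release` looks like in practice, assuming Neanderthal's native vectors (the idea is that the bound resources are released deterministically when the body exits, even on exceptions):

```clojure
;; Sketch only: assumes uncomplicate.commons and Neanderthal are on
;; the classpath with native (MKL) bindings available.
(require '[uncomplicate.commons.core :refer [with-release]]
         '[uncomplicate.neanderthal.core :refer [dot]]
         '[uncomplicate.neanderthal.native :refer [dv]])

;; x and y hold off-heap memory; with-release frees both when the
;; body finishes, so no manual cleanup and no leaked buffers.
(with-release [x (dv 1 2 3)
               y (dv 4 5 6)]
  (dot x y))
```

No try/finally boilerplate, and it composes: nested `with-release` forms release in reverse order of binding.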
<rant>The sad part, like with all the Clojure code I have written in the past, is that I'm having the more social problem of convincing people on my team to use tools like these to make their lives easier, even when you can quantify the productivity gained. People now see me as the lone crazy Lisp guy in the corner, and it's making me question a lot of absurdities in the software industry.</rant>
Any HA features? Sharding to look out for in 2.0? Or is the general idea to set up streaming relays of InfluxDB TSM data and treat HA as an L7 proxy-routing problem (shadowing metric traffic using Envoy, for example)? How do people handle this in their production setups? Curious to know.
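To make the shadow-traffic idea concrete, here's a rough sketch of what that could look like as an Envoy route config. The cluster names are hypothetical, and the `request_mirror_policies` field assumes a reasonably recent Envoy; the point is just that writes hit the primary while a fire-and-forget copy goes to a second backend:

```yaml
# Hypothetical sketch: mirror InfluxDB writes to a shadow cluster.
# Cluster names (influx_primary, influx_shadow) are made up.
route_config:
  virtual_hosts:
    - name: influx_write
      domains: ["*"]
      routes:
        - match: { prefix: "/write" }
          route:
            cluster: influx_primary          # serves the real response
            request_mirror_policies:
              - cluster: influx_shadow       # gets a copy, response ignored
```

This gives you a warm standby (or a place to test 2.0) without the application knowing, though it isn't real HA: nothing reconciles the two backends if the shadow misses writes.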
It would be cool if the query engine could talk to multiple shards spanning multiple machines when dealing with high-cardinality series.
Right now we’re prioritizing work on the single open source server and our cloud service, which has a very different design. Flux will be able to query multiple servers and combine their results (in OSS), but that would be a building block for some HA or clustering.
So you could certainly layer in your own HA solution. We're still working out what, if any, clustered or federated features will exist in open source.
Same here! Trawling around forums and mailing lists to try and get Beryl working on my old desktop with a cheap-ass SiS integrated motherboard was quite a learning experience. Dead-ends all around.
Also learned what hardware acceleration was all about and that my motherboard's VGA chip specifically didn't support it. The first thing I did after saving up for a new laptop was install gentoo+beryl+gnome on it and rotate my desktop cube in all its hardware accelerated glory. Fun times!
Yesterday, as I was taking a left turn on my motorcycle at a big intersection with flashing red signals, I couldn't help but wonder whether I'd be able to communicate to self-driving cars that it was my turn. It was chaotic and required some level of consensus among the vehicles involved.
I've had to deal with fully self-driving cars in SF a couple of times, and I actually became more aggressive about my right-of-way ("well, it's not going to hit me," I thought).
How does Timescale solve the problem of retention? In InfluxDB, old data is thrown out at every tick as the retention window continuously rolls forward. In the world of Postgres, wouldn't this mean an explicit cron-like DELETE of rows all the time?
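My understanding (happy to be corrected) is that Timescale sidesteps row-level DELETEs by partitioning hypertables into time-ordered chunks and dropping whole chunks, which is a cheap metadata operation rather than a vacuum-heavy DELETE. A sketch, with a hypothetical `metrics` hypertable:

```sql
-- Hypothetical hypertable 'metrics'. Instead of DELETE-ing rows,
-- TimescaleDB drops entire time-partitioned chunks at once:
SELECT drop_chunks('metrics', INTERVAL '30 days');

-- Or register a background retention policy so the database itself
-- rolls the window, no external cron needed (TimescaleDB 2.x API):
SELECT add_retention_policy('metrics', INTERVAL '30 days');
```

So the rolling-window behavior is there, it just operates at chunk granularity instead of per-point ticks.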
I was looking for a new phone right before the OpenMoko was supposed to ship, so I pointedly switched carriers to AT&T just because they were rumored to be the carrier where it was easiest to get an OpenMoko working. I bought a Motorola Atrix to use "until I can get an OpenMoko". Sadly, I never did get my hands on one. I don't even remember why now; either they never shipped in volume, or the price was never within reach, or something. Anyway, I've been waiting a long time for a truly open smartphone... and I'm still waiting. Sigh.
Mickey was my mentor; I didn't get a chance to work directly with Sean and Harold. My project was to build APIs in Vala to control the hardware (volume, screen brightness, etc.) through D-Bus. It was a great learning experience, but I could see the ominous signs toward the end of my summer.
I got a device to hack on, and the fact that I could SSH into a phone and have a shell to goof around in was exciting back then. Android had just been announced, IIRC, and getting it to work on the Freerunner was so much fun.