
Thanks, all, for the responses and feedback so far. I posted this to HN a few days ago and noticed it getting a little traction today. I really appreciate it!

It's way past bedtime here; going to get some sleep and will check again later.


Thanks for your feedback! I'm not very proud of the data input in the sidebar either (especially when there is also a scrollbar).

Let me illustrate a bit what my motivation was:

Q: What is the goal of the application?

A: For me personally, the goal of this application is to communicate the impact of IT maintenance to product managers and the rest of the team, in a language that they understand (which is tables and architecture diagrams).

Q: What is the benefit of having the diagram?

A: Personally, a diagram helps me to understand things better than a big Excel sheet would.

Q: Maybe I'm misunderstanding the purpose but to me it seems like the main output from this software is the "maintenance table", but how do the pointers/diagram help instead of just having a dropdown in the table?

A: You've got it exactly. The goal is the "maintenance table". I draw the IT architecture diagram (which is how I think about IT in my head), then add periodic tasks (maintenance plans) to the components, and then it renders the maintenance table automatically. I guess you could also just make a big Excel sheet?


Thanks!

I like your suggestion. I went back and forth a lot between hours and days, and eventually settled on days because I noticed product managers (in bigco) making rough estimates based on "fractions of FTEs over a quarter", so days are granular enough for me to get my point across. But maybe people prefer to calculate in hours... I think it shouldn't be too hard to make hours/days configurable in the "Effort" modal.

Dump of related thoughts:

  - Should hours vs. days be a global setting? Or per maintenance plan?
  - How many hours are in a day? 8? More? Another config?
  - People can put in fractions of a day (like 0.125). Maybe this is fine already?
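If it helps, normalizing everything to one internal unit might sidestep most of these questions. A minimal sketch (HOURS_PER_DAY and to_days are hypothetical names, not anything from the project):

```python
HOURS_PER_DAY = 8  # assumption: could itself be a config option

def to_days(effort: float, unit: str) -> float:
    """Normalize an effort estimate to days for the maintenance table."""
    if unit == "days":
        return effort
    if unit == "hours":
        return effort / HOURS_PER_DAY
    raise ValueError(f"unknown unit: {unit}")

print(to_days(4, "hours"))    # half a day at 8 hours/day
print(to_days(0.125, "days"))  # fractions of a day pass through unchanged
```

Whether the unit is global or per maintenance plan then only affects where `unit` comes from, not the table-rendering code.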


I think Joel has it right in his assessment of hours vs. days: https://www.joelonsoftware.com/2000/03/29/painless-software-...

In another piece somewhere he also states that the engineer working on the task should estimate the hours, because everyone has a different skill set, which affects how long the task takes them.


This is a really good find, thanks for that. It's from 2000 and still relevant. Added an issue to support hours: https://github.com/spmvg/open_it_maintenance_planner/issues/...


Yes, I didn't focus on mobile at all, and I notice it renders but drag-and-drop is broken, argh... I think I underestimated how many people would check this out on mobile.


:)


Half the fun of this project has been doing it in Python, and performance hasn't been an issue so far, which says something about how fast Python already is. And indeed, a native implementation would be ~50x-100x faster.

I must defend the design choice of the dictionary of arrays, though; it was a very conscious one:

  - The "dictionary-of-arrays" approach allows lookups in constant time O(1), irrespective of how much data has already been stored (compared to one big array)
  - The dictionary structure lets me throw away data in the middle easily (without having to handle growing arrays), because the "dictionary-of-arrays" is already chunked. The audio looper will use only some parts of the recorded audio, leaving big parts in between unused.
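For illustration, a minimal stdlib-only sketch of the dictionary-of-arrays idea (names and the chunk size are my own assumptions, not the project's):

```python
CHUNK_FRAMES = 1024  # assumption: fixed-size buffers from the audio callback

class ChunkedRecording:
    """Maps chunk index -> audio buffer.

    Appending and looking up a chunk are O(1) dict operations regardless of
    total recorded length, and unused regions in the middle can be dropped
    without reallocating one big growing array.
    """

    def __init__(self):
        self.chunks = {}       # int -> buffer of CHUNK_FRAMES samples
        self.next_index = 0

    def append(self, buffer):
        self.chunks[self.next_index] = buffer
        self.next_index += 1

    def get(self, index):
        return self.chunks.get(index)  # O(1), however much is stored

    def drop_range(self, start, stop):
        # Free a section the looper no longer needs,
        # e.g. unused audio between takes.
        for i in range(start, stop):
            self.chunks.pop(i, None)
```

The buffers themselves would typically be NumPy arrays in a real audio app; plain lists are used here only to keep the sketch dependency-free.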


Cython and related tools will indeed be the direction to take if performance becomes a bottleneck! Interestingly enough, the audio callback is being handled fast enough on my not-impressive laptop, and CPU usage has been low on average (<1%), so Python trickery hasn't been needed yet.


Indeed, there is no guarantee that the "sleep" will last exactly that long. In the code I'm not "sleeping" in any sensitive places; instead I'm relying on the callback of the audio stream object, which just needs to finish before the next one starts (less of a timing constraint).


Interesting comment! I'm going to figure out if using another driver allows me to get under 20 ms in latency. Right now I'm measuring around 300 ms in latency round-trip, which is not a problem because I can correct for it. (I'm using a Focusrite Scarlett 2i2 with default drivers.)

The reasoning behind my comment about round-trip time was as follows:

  - Right now I'm measuring around 300 ms round-trip time, without any processing in between
  - In the past I've tried to do live effects in Ableton with ASIO drivers (guitar in -> Ableton effects -> out), and the delay was too noticeable. I couldn't play that way without making my ears bleed and I've switched back to pedals since.

One follow-up: how could I achieve a total round-trip latency of around 10 ms, as you describe? If I use a buffer of 500 samples @ 44.1 kHz, then I'm already spending about 11 ms just filling the buffer. So the buffers need to become really small, causing more processing overhead, right? Not sure if this is the way to go.
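The back-of-the-envelope numbers behind this, for anyone following along (this is the fill time of one buffer; round-trip adds input and output buffers plus driver overhead):

```python
SAMPLE_RATE = 44_100  # Hz

def buffer_fill_ms(frames: int, sample_rate: int = SAMPLE_RATE) -> float:
    """Time to fill one buffer of `frames` samples, in milliseconds."""
    return frames / sample_rate * 1000

for frames in (500, 128, 64):
    print(f"{frames} frames -> {buffer_fill_ms(frames):.1f} ms")
# 500 frames -> 11.3 ms
# 128 frames -> 2.9 ms
# 64 frames -> 1.5 ms
```

So single-digit round-trip latency does force buffers down into the 64-128 frame range, which means the per-buffer processing budget shrinks accordingly.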


Yeah, your Scarlett should be capable of single-digit ms latency. If you're on Windows, you need to install its ASIO drivers and figure out how to use them from Python. Then, yes, use tiny buffers and run your audio processing very fast, which is where Python's slowness will probably become a real problem.

10 ms latency is how long sound takes to travel 3-and-a-bit metres. So if your amp is a few metres from you, you would experience that delay between hitting the guitar strings and hearing the amplified sound. This should barely be noticeable. If you were noticing a delay greater than that in your Ableton effects setup, your settings needed tweaking. All of this is completely possible - I had a PC-based electronic drum setup in 2006, running through the Reason DAW, which had 8 ms latency between hitting a pad and hearing the result.
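The amp comparison is just speed-of-sound arithmetic, for the record:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def delay_to_distance_m(delay_ms: float) -> float:
    """How far sound travels during `delay_ms` milliseconds."""
    return SPEED_OF_SOUND * delay_ms / 1000

print(delay_to_distance_m(10))  # 3.43 m: a 10 ms delay "sounds like" an amp a few metres away
```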

Hmm, I wonder if Cython (static Python-to-C compiler) would make writing audio code easier/more possible?


With Ableton and the default ASIO configuration on my Scarlett I get 96 ms combined input+output latency without any processing in between, so that's probably what made my ears bleed before. Tweaking the sample rate and buffer size does indeed get me single-digit latencies in Ableton. So I'm definitely going to adjust the section about latency, thanks for this!

I'm a bit on the fence about what this means for the tricky latency-calibration routine in the application. Ideally I could throw the calibration routine away, but then I'd require users to have ASIO installed, while the app currently also works with non-ASIO drivers. And indeed, Python itself might become a bottleneck (making this work in Python has been half the fun).


Even without ASIO you should be able to hit 40 ms latency on pretty much any Windows audio hardware, including motherboard built-in.

If you get 300 ms you're doing something wrong. Note that Windows has multiple audio APIs; 300 ms is about the latency of the old MME API. You need to use the newer one, WASAPI.


Apparently I do only have the old Windows MME drivers (and ASIO, on Win10). I need to look into why I can't find WASAPI, and whether I can assume other Windows users have it by default.


WASAPI has been available since Windows Vista. It isn't its own set of drivers but rather a unifying layer over the WDM driver and the preceding mishmash of Windows audio APIs (MME, DirectSound, etc.). WASAPI supports low-ish latencies in Exclusive Mode, and something like 10 ms of buffering in Shared Mode through the Windows audio engine, as I recall.

Put another way: any Windows audio device supports WASAPI unless it only ships with an ASIO driver which is unlikely, even in the pro audio space.


Try a Clarett interface. It also comes with preamps that will make your sound less noisy; the Scarlett preamps are just absolutely terrible. You can debug your DAW to see how it uses the drivers and write a Python module which exposes similar functions to Python. You will likely still want delay compensation to make things seem free of any latency, but it will be doing _much_ less compensating. Maybe there's an open-source DAW if you want to skip reversing driver calls from a debugger.


Debugging an existing DAW to see how they do it under the hood is an interesting idea. Haven't done that yet.

About another interface: I do want to keep the application supporting cheaper interfaces such as the Scarlett, because the target audience (hobby musicians) will be using those. Still would be a nice upgrade for me!


You can take a peek at how the Tracktion engine does it too.


Tracktion looks like an interesting project, thanks for pointing me to this!


I don't know Windows audio, but on Mac audio that's wildly high latency for a Scarlett interface.


I would disable any services and programs running in the background as well. Years ago I disabled the Windows print spooler and it greatly improved jitter. Not sure if that's still the case these days though, that was probably 10 years ago.


So far CPU usage hasn't been an issue at all (<1% usually on my not-very-impressive laptop), which surprised me as well.

