
Very interesting concept and neat visuals. Before I dive in, I'm a trained vocalist and breathing is core concept #1. My feedback may be geared towards our line of work, but I believe breathing mastery is analogous across domains. Here are my initial thoughts:

Using the microphone and camera makes sense. When the camera required calibration, my excitement grew. Then my anticipation gave way to disappointment when the bar showed no sensitivity to silent breathing or 'shoulder breathing.' I'm now struggling to see what the camera is needed for - ideally you want to detect whether someone is shrugging their shoulders instead of filling their diaphragm, but the camera positioning isn't suited for that, nor did my experimentation reveal any sensitivity to that input.

Further, while one can 'cancel out' the audio stimulus from the audio input in software, I wonder if there's interference. I found it takes noticeably loud breathing to get the bar to move along with me - and even then, the bar shifts before I'm ready to shift. In some sense, I guess that's the intended behavior change - however, not all loud breathing is good breathing, and leaving us without feedback that we should change our breath tempo doesn't help us get better.

This overall is a wonderful idea - and would be perfectly fine (really better in my opinion) without the request for camera and microphone access.




Thanks, that's very useful feedback. At the moment, the camera isn't utilised to its full potential - it is used to guide positioning rather than to measure accessory muscle use or shoulder movement. I think adding sensors to the experience does add something overall, and hopefully with more development the app could alert the user to things like hyperventilation and dysfunctional breathing patterns.


You could use the front TrueDepth sensor that sits behind Face ID unlocking. You get quite detailed RGB + depth data, which could possibly capture even the subtle movements of the upper chest.

see Steffen Urban, Thomas Lindemeier, David Dobbelstein, Matthias Haenel, "On the Issues of TrueDepth Sensor Data for Computer Vision Tasks Across Different iPad Generations" and Andreas Breitbarth, Timothy Schardt, Cosima Kind, Julia Brinkmann, Paul-Gerald Dittrich, Gunther Notni, "Measurement accuracy and dependence on external influences of the iPhone X TrueDepth sensor".
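For anyone curious, here's a minimal sketch of how reading depth frames from the front TrueDepth camera might look with AVFoundation. This assumes an iOS app that already holds camera permission; the class name, queue label, and the idea of averaging a chest region into a breathing signal are my own illustrations, not anything from the Lungy app itself:

```swift
import AVFoundation

/// Streams per-frame depth maps from the front TrueDepth camera.
/// Sketch only - names and the breathing-signal idea are illustrative.
final class ChestDepthReader: NSObject, AVCaptureDepthDataOutputDelegate {
    private let session = AVCaptureSession()
    private let depthOutput = AVCaptureDepthDataOutput()

    func start() throws {
        // builtInTrueDepthCamera is only present on Face ID devices.
        guard let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                   for: .video,
                                                   position: .front) else {
            return // no TrueDepth sensor on this device
        }
        let input = try AVCaptureDeviceInput(device: device)
        session.beginConfiguration()
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(depthOutput) {
            session.addOutput(depthOutput)
            depthOutput.isFilteringEnabled = true // interpolate holes in the depth map
            depthOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth"))
        }
        session.commitConfiguration()
        session.startRunning()
    }

    // Called for every depth frame; comparing successive frames over time
    // could track small chest/shoulder displacement.
    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        let depthMap = depthData
            .converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
            .depthDataMap
        // depthMap is a CVPixelBuffer of Float32 metres; one could sample a
        // region over the upper chest and average it into a 1-D breathing signal.
        _ = depthMap
    }
}
```

Whether the depth resolution is good enough to separate shallow diaphragm movement from shoulder shrugging at arm's length is exactly what the papers above examine.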


Could you use the LiDAR data on newer iPhones to get more accurate assessment? https://developer.apple.com/documentation/avfoundation/addit...


Hmm, it currently uses TrueDepth on devices that support it. I think LiDAR is only on the back-facing cameras?


Ah, good point - that kind of defeats the visual part of your app :) Nice work btw!


I just bought it. The microphone doesn’t seem to do a lot (and I didn’t give access to the camera), but I will try it out later.

It’s a well-thought-out app. The core function is really good, and I will enjoy trying out the different exercises! The design of the home screen could be better - it looks a bit cluttered.

If it takes off, some background information would be a nice future addition for people who don’t know the science behind each breath type.

Other than that, I wish you success!


It's not fair to judge the microphone when you didn't use the camera to position it correctly.

There is an exact distance and tilt the phone needs to get the best sound, and it's not really possible to nail that without the help of the camera.


Have a quick look here if you think the mic input is not working correctly: https://www.lungy.app/how-to. If that doesn't help, please send a message to hello [at] lungy dot app.

I agree the home screen could be clearer and the layout could be improved on the exercises to include more info. Thank you for the feedback!



