hyferg's comments | Hacker News

Reproducing this cheap optogenetic rig to control E. coli gene expression using light.

https://www.biorxiv.org/content/10.1101/2022.07.13.499906v1


Hey, we're working on this!

https://pgpod.com/

You have to add articles yourself and it does not have any special logic for HN threads yet.


This is very cool, for sure. But what I'm looking for is something super optimized for threaded conversations, like this site. Imagine if each poster were a unique voice, and the format was basically listening to the top conversations for a given post, in a way that makes the back-and-forth of the threads easy to follow. There would clearly need to be a bit of design applied and some simplification, but I think that's where a GPT element could come in: producing a simplified conversation with a condensed set of speakers.


Interesting, I think the point about processing with GPT is important. Our thing started as naive narration, but many articles are hard to follow when simply narrated.


Would be cool to use the username as a seed so that the voice is the same every time for a poster that frequents the auto-cast.
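
A rough sketch of what I mean (the voice IDs are just examples, not tied to any particular TTS service): hash the username to a stable index into a fixed voice list, so a given poster always sounds the same.

    // Map a username to a stable voice so the same poster always
    // sounds the same across episodes.
    const VOICES = ["alloy", "echo", "fable", "onyx", "nova", "shimmer"]; // example voice IDs

    function voiceForUser(username: string): string {
      // FNV-1a hash gives a deterministic, well-spread index.
      let hash = 0x811c9dc5;
      for (let i = 0; i < username.length; i++) {
        hash ^= username.charCodeAt(i);
        hash = Math.imul(hash, 0x01000193) >>> 0;
      }
      return VOICES[hash % VOICES.length];
    }

    // voiceForUser("hyferg") returns the same voice on every run.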


You may learn something from my experience building the same thing: https://www.joshbeckman.org/narro/


Thanks, this is very valuable. I sent an email.


Hey, I'm building something related and would love to get your feedback on it. I can reply with my contact details if you're interested.


We had to restream server events from OpenAI -> our backend -> the client. It was pretty simple.
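
Roughly something like this Express route (a sketch, not our actual code; the route path and model name are placeholders): the backend requests a streamed completion and forwards the chunks to the client as they arrive.

    import express from "express";

    const app = express();
    app.use(express.json());

    // Proxy a streamed completion: OpenAI -> our backend -> client.
    app.post("/api/chat", async (req, res) => {
      const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: "gpt-4o-mini", // placeholder model
          stream: true,
          messages: req.body.messages,
        }),
      });

      // Forward the server-sent-event chunks as they arrive.
      res.setHeader("Content-Type", "text/event-stream");
      const reader = upstream.body!.getReader();
      while (true) {
        const { value, done } = await reader.read();
        if (done) break;
        res.write(value);
      }
      res.end();
    });

    app.listen(3000);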


You can have the model return log probs for each generated token. These can be used to assess the model's confidence on tasks that involve nominal data.

If that's not helpful, were you getting at having the model return richer data, such as the attention weights that went into generating a given token?
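
For example, something like this against the chat completions API (a sketch; the model name is a placeholder, and averaging log probs is just one crude way to get a confidence number):

    // Ask for per-token log probs and reduce them to a rough confidence score.
    async function classifyWithConfidence(prompt: string) {
      const resp = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: "gpt-4o-mini", // placeholder model
          messages: [{ role: "user", content: prompt }],
          logprobs: true,
          top_logprobs: 3,
        }),
      });
      const data = await resp.json();

      // Each entry has { token, logprob, top_logprobs } for one generated token.
      const tokens = data.choices[0].logprobs.content;
      const avgLogprob =
        tokens.reduce((sum: number, t: { logprob: number }) => sum + t.logprob, 0) /
        tokens.length;

      return {
        answer: data.choices[0].message.content,
        confidence: Math.exp(avgLogprob), // geometric mean of token probabilities, 0..1
      };
    }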


For most of our models we return more information. Especially if you look at it from a vendor/customer perspective, I think this is quite important.


They seem to achieve the 'multimaterial' label by soaking different parts of the polymer in exclusive precursors. If you wanted to create advanced microelectronics using this method, you would probably want to be able to control the gel-differentiation process as part of polymerization.


Whenever I start a project I usually look for an open-source equivalent for inspiration, even if it's not maintained. We opened up bonsai in case someone else comes along and wants to understand how we did a few things. There are some fun hacks we had to do around overlaying the main window on macOS that might be helpful for other Electron apps.
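
Not the exact hacks from bonsai, but the usual ingredients for an overlay-style window in Electron on macOS look roughly like this (the URL is a placeholder):

    import { app, BrowserWindow } from "electron";

    app.whenReady().then(() => {
      // Frameless, transparent window pinned above other apps.
      const overlay = new BrowserWindow({
        transparent: true,
        frame: false,
        hasShadow: false,
        resizable: false,
      });

      // Stay above full-screen apps and follow every Space on macOS.
      overlay.setAlwaysOnTop(true, "screen-saver");
      overlay.setVisibleOnAllWorkspaces(true, { visibleOnFullScreen: true });

      // Let clicks pass through to whatever is underneath.
      overlay.setIgnoreMouseEvents(true, { forward: true });

      overlay.loadURL("https://example.com/overlay"); // placeholder content
    });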


PropelAuth is great and a breeze to integrate! We were dreading building out B2B features like orgs and spending time making login flows for our analytics dashboard product. This was all provided out of the box with Propel.

You can tell Andrew uses it himself from the practical docs and nice touches like the Express middleware. I imagine it must be a bit mind-bending to dogfood the service.


Appreciate it! PropelAuth's authentication is powered by PropelAuth itself, which is definitely confusing sometimes. I try to think of them as two different services (the APIs/hosted pages as one, and an internal configuration service as the other), which helps.

The plus side is everything we build for ourselves we can release to customers.


This seems like a step in the right direction away from sphere-projected 'immersive' video. Very cool, and I can't wait for the 6DoF VR player to be ready!


Yes, sphere-projected VR/360 video isn't immersive enough given its limitation to 3DoF. 6DoF video feels so much more immersive.


The motion parallax supported here makes it more natural to consume such content, and when you drive the motion yourself (using our Vimmerse player) it takes you to a whole new level. For some content, you really feel that you are present there!


We embedded a web browser and wrote our UI in React for a previous VR project :)


Sounds like a nightmare

