If you run a pentest, allowing rooted devices will almost certainly show up as a vulnerability. It'll be marked "low risk", but you'll also be told that you don't want to "accept risk" on too many "low risk" findings.
So somebody then needs to argue that this isn't worth worrying about, rather than doing the easy thing and remediating it.
ScribbleVet (https://scribblevet.com) | Full-stack, former technical founders | Full-time | Remote (must be able to work US timezones)
ScribbleVet is an AI-powered scribe for veterinarians. We save veterinarians two hours every day, reduce veterinary burnout and help animals get better care.
* We're generating real ($XM/yr) revenue
* Our users love the product
* We're fully remote but not asynchronous, so we have a lot of autonomy while it's still easy to collaborate with someone when needed.
We're looking for:
* Senior engineers (former founders or early startup experience strongly preferred) who can work across the stack.
* Engineers with a product mindset. Customer-centricity or design skills are a plus.
We’ve found that the engineers who are happy and successful at Scribble really enjoy technology (learning new frameworks, experimenting with cutting-edge platforms) and are very product-oriented.
ScribbleVet (https://scribblevet.com) | Designer (marketing & product) | Full-time | Remote (must be able to work US timezones)
ScribbleVet is an AI-powered scribe for veterinarians. We save veterinarians two hours every day, reduce veterinary burnout and help animals get better care.
- We're generating real ($XM/yr) revenue
- Our users love the product
- We're fully remote but not asynchronous, so we have a lot of autonomy while it's still easy to collaborate with someone when needed.
We're looking for:
- A designer to join our team (full-time) to work across product design (UX) and marketing assets (landing pages, trade show materials, etc.)
- Strong product skills required, along with experience as a founder, early employee, or building something from scratch
If you're interested and think you might be a fit, please contact me at rohan (at) scribblevet.com!
You'll probably find this talk [1] interesting. They control all the training data for small LLMs and then perform experiments (including reasoning experiments).
ScribbleVet (https://scribblevet.com) | Full-stack (frontend emphasis), former technical founders, AI engineers | Full-time | Remote
(must be able to work US timezones)
ScribbleVet is an AI-powered scribe for veterinarians. We save veterinarians two hours every day, reduce veterinary burnout and help animals get better care.
- We're generating real ($XM/yr) revenue
- Our users love the product
- We're fully remote but not asynchronous, so we have a lot of autonomy while it's still easy to collaborate with someone when needed.
We're looking for:
- Senior engineers (former founders or early startup experience preferred) who can work across the stack and are AI-curious
- Engineers with a product mindset. Customer-centricity or design skills are a plus.
Not OP but check out Closer to Truth on YouTube. PBS show hosted by a former neuroscience PhD, they have tons of recent interviews with leading thinkers on consciousness (among other fascinating topics).
Is Modal a good solution for running fine-tuned LLMs and Whisper models? If the cold-start time is low we're more than willing to modify our code to use Modal's infra.
Happy to follow up via email but didn't see one in your profile.
It requires API access, but once you have access you can easily play around with it in the openai playground.
Setting the temperature to 0 makes the output deterministic, though in my experiments it's still highly sensitive to the inputs: for the exact same input you get the exact same output, but changing just one or two words (even in ways that don't change the meaning) can produce a different output.
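A minimal sketch of why temperature 0 behaves this way (a toy softmax sampler, not OpenAI's actual implementation; the logit values are made up): at temperature 0, sampling collapses to argmax over the model's token scores, so identical inputs always give the same token, yet a small shift in those scores — e.g. from rewording the prompt — can flip which token wins.

```python
import math
import random

def sample_token(logits, temperature):
    """Toy next-token sampler. temperature=0 degenerates to argmax."""
    if temperature == 0:
        # Deterministic: always pick the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise sample from the temperature-scaled softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 2.1, -1.0]        # made-up scores for three candidate tokens
print(sample_token(logits, 0))   # always 1: same input, same output

# A small perturbation of the scores (as a reworded prompt might cause)
# changes the argmax, and thus the "deterministic" output:
nudged = [2.2, 2.1, -1.0]
print(sample_token(nudged, 0))   # now 0
```

The point of the sketch: determinism at temperature 0 is a property of the decoding rule, not robustness of the model to its inputs.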
I think you can go far quite cheaply. Get your code working on smaller/toy models, and then when you want to test it on larger ones you can ship it over to a machine at one of the cheaper providers (vast.ai/jarvislabs etc) to give it a run before pausing/killing the machine.
I've been porting Stable Diffusion (which isn't a small model) over to Elixir, and as part of that I've been starting/stopping my jarvislabs machine when I start/stop building. I've been spending about $1/day without trying to be efficient.
Also, fast.ai is a great resource for learning ML, I highly recommend it.