Hacker News | matyaskzs's comments

This sounds like a nightmare and very questionable in any serious work capacity. You're paying to breach your NDA...


Meta: Sigh, misclick; I deleted the wrong comment, so here it is again.

Screen sharing is optional - it's up to the user to decide whether they want to use it.

Not everyone works under NDA. Some of our users are medicine/law students, entrepreneurs, life coaches, history professors etc.


Elon Musk publicly expressed his willingness to engage in a physical fight with Mark Zuckerberg, the CEO of Meta, stating he is ready to fight 'any place, any time, and under any rules.' This statement was made during Musk's visit to Capitol Hill, where he attended a speech by Israel's Prime Minister Benjamin Netanyahu. The challenge has sparked widespread interest and speculation among social media users and the public.


Brainstem hacking should be a regular term.


I am sure one of those companies will offer you a job. This is peak data science.


Thank you :) Made it as a time saver for "research purposes" though.


Cloud cannot be beaten on compute / price, but moving to local could solve privacy issues and the world needs a second amendment for compute anyway.


> Cloud cannot be beaten on compute / price

Sorry, I can't let misinformation like that slide.

Cloud cost/benefit ratio is not good in many circumstances.

For hobbyists it works well because you run your job for very brief periods and renting is much cheaper than buying in those cases. Similarly, if your business usage is so low as to be effectively run once per day then cloud has major benefits.

However, if you are doing any kind of work that consumes more than 8 hours of compute time per day, cloud starts being much more expensive.

The exact cost/benefit depends on the SKU, and I'm mostly talking about CPU/memory/storage; for managed services like databases it's significantly worse. I'm also comparing to rented servers, not self-hosting at home, which is cheaper still.

Local hardware has downsides (availability, inflexibility), but it's faster and cheaper in almost all real workload scenarios, except those where the compute would otherwise sit completely idle or powered off more than 90% of the time.
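A rough break-even sketch makes the utilization argument concrete. The prices below are made up purely for illustration (not real cloud or dedicated-server rates); the point is only the shape of the comparison: hourly billing wins at low utilization, a flat-rate machine wins once usage is high enough.

```python
# Break-even sketch: hourly cloud billing vs. a flat-rate dedicated rental.
# Both prices are hypothetical, chosen only to illustrate the crossover.
CLOUD_RATE = 3.00        # $/hour for an on-demand instance (made-up)
DEDICATED_MONTHLY = 700  # $/month for a comparable rented server (made-up)

def monthly_cost(hours_per_day: float) -> tuple[float, float]:
    """Return (cloud_cost, dedicated_cost) for a 30-day month."""
    cloud = CLOUD_RATE * hours_per_day * 30
    return cloud, float(DEDICATED_MONTHLY)

for h in (1, 8, 24):
    cloud, dedicated = monthly_cost(h)
    winner = "cloud" if cloud < dedicated else "dedicated"
    print(f"{h:2d} h/day: cloud ${cloud:8.2f} vs dedicated ${dedicated:.2f} -> {winner}")
```

With these example numbers the crossover sits just below 8 hours a day; real break-even points shift with the SKU, but heavy daily utilization consistently pushes the math toward owned or flat-rate hardware.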


I should have phrased it better. If you rent cloud compute from a big provider, you will probably pay more than if you ran that same compute yourself; but the provider's underlying cost is lower thanks to economies of scale. They get cheaper deals on hardware, electricity, and almost everything else you'd need.

On the lower end, you can't beat a cheap Hetzner VPS on price, reliability, and compute, even running it 24/7.


You can beat GPT-4/Claude on price/performance for most things by a mile using fine-tuned models running in a colo. The extra parameters give the big chatbots the ability to understand malformed input and answer off the cuff about almost anything, but small models can be just as smart within limited domains.


The problem is that once you say "fine-tuned" you have immediately slashed the user base down to virtually nothing. You need to fine-tune per-task and usually per-user (or per-org). There is no good way to scale that.

Apple can fine-tune a local LLM to respond to a catalog of common interactions and requests but it’s hard to see anyone else deploying fine-tuned models for non-technical audiences or even for their own purposes when most of their needs are one-off and not recurring cases of the same thing.


Not necessarily: you can fine-tune on a general domain of knowledge (people already do this and open-source the results), then use on-device RAG to give the model specific knowledge within that domain.
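A minimal sketch of the on-device RAG half of that idea, using naive bag-of-words overlap instead of embeddings to keep it dependency-free. The documents, function names, and scoring are all illustrative; a real setup would use an embedding index and hand the assembled prompt to a locally running fine-tuned model (e.g. via llama.cpp), a step omitted here.

```python
# Sketch of on-device RAG: retrieve domain documents by word overlap,
# then prepend them to the prompt for a local fine-tuned model.
# Everything here is illustrative; real systems use embedding search.

DOCS = [
    "Statute of limitations for contract claims is six years in many US states.",
    "A tort claim generally requires duty, breach, causation, and damages.",
    "Consideration is required for a contract to be enforceable.",
]

def score(query: str, doc: str) -> int:
    """Count words shared between the query and a document (lowercased)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k highest-overlap documents."""
    return sorted(DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble retrieved context plus the question for the local model."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# The resulting prompt would be fed to the local model; that call is omitted.
print(build_prompt("What is required for a contract?"))
```

The division of labor is the point: the fine-tune teaches the small model the general domain (law, in this toy example), while retrieval supplies the specific facts at query time, so neither piece has to be per-user.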

