Hacker News | kubiton's comments

I got my HTC Vive out of the closet today and, as usual, hate USB micro and love USB-C.

USB-C just fits.

Do you mean the full-size USB plug, the bigger one? Those depend on the device for me. The USB-A ports on my desktop case are okay; the ones at the back of my mainboard are way too tight.


What if we are just the result of an ML network with a model of the world?


We're not.


Another one who doesn't understand the power of LLMs.

The magic is not that it can tell you things, but that it understands you with very high probability.

It's the perfect interface for expert systems.

It's very good at rewriting texts for me.

It's very good at telling me what a text is about.

And it's easy enough to combine an LLM with expert systems through APIs.

I, for example, mix languages when talking to ChatGPT, just because it doesn't matter.

And yes, it's often right enough. For GitHub Copilot, for example, it doesn't matter at all whether it's always right or only 80% of the time.

It only has to be better than not having it, at 20 bucks a month.
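The "LLM as interface to expert systems" idea can be sketched roughly like this. All names and the JSON shape below are made up for illustration, and the model call itself is stubbed out; in practice the JSON would come from the model via a "respond only with JSON" prompt or a function-calling API:

```python
import json

# Hypothetical expert-system backend: a stand-in for a real rules
# engine that returns authoritative answers.
def tax_rate_lookup(params):
    rates = {"DE": 0.19, "FR": 0.20}
    return rates.get(params.get("country"))

EXPERT_SYSTEMS = {"tax_rate": tax_rate_lookup}

def dispatch(llm_output: str):
    """Parse the LLM's structured reply and route it to an expert system.

    The LLM is only the interface: it turns free-form user text into a
    JSON intent; the answer itself comes from the backend, so the model
    being "only 80% right" matters far less.
    """
    intent = json.loads(llm_output)
    handler = EXPERT_SYSTEMS[intent["intent"]]
    return handler(intent.get("params", {}))

print(dispatch('{"intent": "tax_rate", "params": {"country": "DE"}}'))  # → 0.19
```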


> it doesn't matter at all if it's always right or only 80%

People get fired every day for pasting code that is 80% right and only worked on a couple of test cases.


And yet, nobody has regulated Stack Overflow...


Likely because human generation of content is more expensive and doesn't scale as far?

Though SO has a lot of moderation, so it's somewhat self-regulated.


What?

I've never seen this happen. On the contrary: still not every team does code review, and plenty of people regularly fix bugs in production.

One ex-colleague invalidated all the Apple device certificates and didn't get fired.

A previous tech lead wrote code that deleted customer data, and we only found that out half a year later; no one was fired.

And no one got fired over a code review.


Get a summary of a contract or law wrong and lose a few million dollars... sadly, it really is, yet again, a piece of AI only for low-risk applications.


You make it sound as if all or most things we do are 'high risk'.

And I clearly showed an example of how an LLM is more an interface than an answering machine.

If an LLM understands the basics of law, it is surely much better than a lot of paralegals at transforming the info into search queries for a fact database.

And I'm pretty sure there are plenty of mistakes in existing legal work.


No, most activities are low risk, but my point is that we seem to struggle with AI in high-risk areas while we automate human flaws.

Also, LLMs don't understand anything other than via the language representation.


Either LLMs/AI become perfect, or we will start writing code and frameworks that make it much easier for LLMs/AI to use them.
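One way "LLM-friendly" code could look: instead of a terse signature, each function ships a machine-readable description in the JSON-Schema style used by function-calling APIs, plus a cheap guard against malformed model-generated calls. This is a sketch; all names below are made up:

```python
# A hypothetical tool description an LLM could read to decide when and
# how to call the function, written as a JSON-Schema-style dict.
SEARCH_TOOL = {
    "name": "search_orders",
    "description": "Find customer orders. Use when the user asks about "
                   "an order's status, contents, or delivery date.",
    "parameters": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string", "description": "Internal customer ID"},
            "status": {"type": "string", "enum": ["open", "shipped", "delivered"]},
        },
        "required": ["customer_id"],
    },
}

def validate_call(tool, args):
    """Reject a model-generated call that misses required arguments."""
    missing = [p for p in tool["parameters"]["required"] if p not in args]
    return (len(missing) == 0, missing)

print(validate_call(SEARCH_TOOL, {"customer_id": "c42"}))  # → (True, [])
print(validate_call(SEARCH_TOOL, {"status": "open"}))      # → (False, ['customer_id'])
```

The point is that the framework meets the model halfway: rich descriptions go in, and imperfect output gets validated before it touches anything real.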


I'm mind-blown because it's already so good, and I do imagine it will get as good as you need it to be.

And yeah, as you said, with domain knowledge it works very well.


What? Why?


Thirty years ago it was also believed that you could just cure gayness.

And 70 years ago women were not allowed to vote.

It's so common that even today you hear people defending rapists, because either 'it wasn't rape', or it was a family member, or it would be shameful to the family to have a 'dirty' person.

While your one-sided analysis might also have some merit, the past was not at all known for being full of facets.

That is an achievement of modern times, thanks to faster and more global communication through the internet, and also denser populations.

And yes, it is a real problem when everyone has a smartphone and uses it constantly. The answer? Time budgeting built into our phones.


You can use an LLM as an interface only.

That works very well when using a vector DB and APIs, as you can easily send context/RBAC information along with it.

I mentioned it before: I'm not that impressed by LLMs as a form of knowledge database, but much more so as an interface.

The term 'OS' was used here a few days back, and I like that too.

I actually used ChatGPT just an hour ago, and interestingly enough it converted my query into a Bing search and responded coherently with the right information.

This worked tremendously well; I'm not even sure why it did it. I asked specifically about an open-source project, and previously it just knew the API spec and docs.
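A minimal sketch of the vector-DB-plus-RBAC idea: filter documents by the caller's role first, then rank by similarity, so access control is enforced before anything reaches the LLM prompt. Toy hand-made 2-d embeddings stand in for a real model and database; all names are hypothetical:

```python
import math

# Toy corpus: each document carries an embedding and the roles allowed
# to see it. In a real setup the embeddings would come from a model and
# live in a vector database.
DOCS = [
    {"text": "Public API docs", "vec": (1.0, 0.0), "roles": {"user", "admin"}},
    {"text": "Internal runbook", "vec": (0.9, 0.1), "roles": {"admin"}},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query_vec, role, k=1):
    """RBAC filter first, similarity ranking second.

    Whatever comes back is what gets pasted into the LLM prompt as
    context, so the model never sees documents the caller can't.
    """
    visible = [d for d in DOCS if role in d["roles"]]
    visible.sort(key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in visible[:k]]

print(retrieve((1.0, 0.05), role="user"))   # → ['Public API docs']
print(retrieve((0.9, 0.1), role="admin"))   # → ['Internal runbook']
```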


ML training, for example.

"Generate a cartoon in the style of …"


I don't know why, but my knee-jerk reaction is that this is the worst use of the material.

Why is that? Is anyone else with that opinion able to put the reason into words? I'm not really sure why I think that, but I do.


No clue what your problem is with it.

Old material always has some use, and we do live in a timeless time now.

Reusing old material and getting inspired by it is part of human creativity.


You're definitely not winning me over with this.


I've watched him for a long time, and he is just right.

The progress in so many fields is fast.

And a ton is already available to use.

All the NeRFs, most of the Nvidia topics, etc.

Or all the character animation stuff; those things are in games and in the industry.

And even the few that only show the direction make it very obvious where the road is going in the next 5-10 years.

I'm honestly surprised that you think he is naive/too hyped.

The Nvidia ML denoiser and real-time ray tracing alone are basically redefining graphics. We are in the middle of all of it.

