Hacker News | jstrebel's comments

Lefties sympathizing with criminals, sharing their wealth-distribution fantasies, agitating against competing political views. You've come a long way, CCC! The initial idea was political too, but with a clear focus on freedom of information and the power to govern your own personal data.

In all fairness: human senior devs view AI-written source code with some disdain, as it usually does not match their stylistic and idiomatic preferences (even when it is correct and fully working). I don't think untested code is the problem here - you can easily measure test coverage, and of course every CI/CD pipeline should run the existing unit and integration tests.

I am certain that LLMs can help you with judgment calls as well. I spent the last month tinkering with spec-driven development of a new Web app, and I must say the LLM was very helpful in identifying design issues in my requirements document and actively suggested sensible improvements. I did not agree with all of them, but the conversation around high-level technical design decisions was very interesting and fruitful (e.g. cache use, architectural patterns, trade-offs between speed and a higher level of abstraction).


The publicly funded media (radio, TV) obviously use this finding to claim that they need more money and/or a tighter regulation of AI companies' products. Sounds a bit self-serving to me...


If you want to do research, usually the first thing to do is refine your research question to the point where it can be related to the scientific state of the art and where it becomes clear how to test and evaluate it. I don't think you are there yet.


Yeah, not sure how the process you described works. I'm just a creative thinker who likes to extrapolate ideas. I figured people on Hacker News might be interested in this subject and could point me toward some books to read on it.


I think this "argument" has always been flawed. I don't need to justify what information I would like to share, especially not with state agencies. In Germany, this is even encoded in a legal principle called "Informationelle Selbstbestimmung" (informational self-determination). It's not about the information, it's about my right to decide about sharing it.


Impressive setup, but I would assume it to be very operations-intensive because of the high number of deployed components and their complex configuration. Plus, if you are serious about self-hosting, you need the facilities and infrastructure to deploy it: a server rack, redundant power supply, smoke detectors, a fire extinguisher... I would never let my PC-grade hardware run unsupervised in my home. And if I understood correctly, you would still have to run some server on the Internet for your Headscale VPN, so you need a dedicated Internet connection - ADSL, dial-up, or a cable modem would not be enough.


I absolutely love this paper, and it's a shame that this research does not receive more attention. Everybody is raving about LLMs, but everybody is also ignoring the shaky foundations on which they are built (just think of training-data poisoning). It is also a shame that, to my knowledge, there are no real software applications that implement the iconic and categorical representations and try to build an AI system around them.


Purely symbolic AI has been tried and found wanting. Decades of research by hundreds of extremely bright people explored a large number of promising-looking approaches to no avail. Intuition tells us thinking is symbolic; the failure of symbolic systems tells us intuition is most likely wrong.

What is interesting about current LLM-based systems is that they follow exactly the model suggested by this paper, by bolting together neural systems with symbol manipulation systems - to quote the paper "connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling."

They are clearly also kludges. As you say, they are built on shaky foundations. But the success - at least compared to anything that came before - of kludged-together neural/symbolic systems suggests that the approach is more fertile than its predecessors. They are also still far, far away from the AGI that has been predicted by their most enthusiastic proponents.

My best guess is that future successful hard-problem-solving systems will combine neurosymbolic processing with formal theorem provers: the neurosymbolic layer constructs proposals with candidate proofs and submits them to symbolic provers, which test them for success.
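The propose-and-verify split described above can be sketched in a few lines. This is a toy illustration, not anyone's actual system: the "neural" proposer is stubbed out as a list of guesses, the "symbolic prover" is an exact arithmetic check, and all function names here are hypothetical.

```python
# Toy propose-and-verify loop: a stubbed "neural" proposer generates
# candidate answers, and an exact symbolic checker certifies only the
# correct ones. In a real system, the proposer would be a neurosymbolic
# model and the verifier a formal theorem prover.

def neural_proposer(goal_poly):
    """Stand-in for a learned model: emits unordered candidate roots
    for the goal 'find x with goal_poly(x) == 0', most of them wrong."""
    return [1, 4, 2, 7, 3]

def symbolic_verifier(goal_poly, candidate):
    """Trusted, exact check: does the candidate satisfy the goal?"""
    return goal_poly(candidate) == 0

def solve(goal_poly, proposer):
    """Accept the first proposal that the verifier certifies."""
    for candidate in proposer(goal_poly):
        if symbolic_verifier(goal_poly, candidate):
            return candidate
    return None  # no certified proposal found

root = solve(lambda x: x**2 - 5*x + 6, neural_proposer)
print(root)  # -> 2, the first proposed candidate that verifies
```

The key property is that the proposer may be unreliable: soundness comes entirely from the verifier, so the neural side only needs to be good enough to hit a correct candidate occasionally.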


I think there is a misunderstanding - the whole point of my comment was that LLMs lack the sensory input that could link the neural activations to real-world objects and thus provide a grounding for their computations.

I agree with you that purely symbolic AI systems had severe limitations (just think of the expert systems of the past), but the direction must not only go towards higher-level symbolic provers but also towards lower-level sensory data integration.


I played it yesterday, and IMHO the visual appearance looks a bit inconsistent. On the one hand, you have the satellite-style, high-detail top-down landscape view; on the other hand, you have the very basic, geometric, small, single-color shape of the boat. I would try to reduce the level of detail of the environment so the overall scene gets easier to observe and understand visually. Can you zoom in on the boat a bit?


Well, that's right, but only the currently elected parliament is allowed to stay in office during wartime, not the president. So you cannot hold elections, but the president also cannot simply stay in power without them.

