Hacker News | axegon_'s comments

From a personal perspective, my view on ads has done a complete 180 over the past decade. Ten years ago, I was massively in favor of ads: they were a nice way to indirectly support those who produced high-quality content. And while I acknowledge that cloud providers have astronomical costs associated with the ability to serve so much data on request instantly, at this point the internet is borderline unusable without hyper-aggressive ad blocking, both at the browser level and the DNS level. Everything in the article is completely true, especially the common characteristics. Take any video, for instance: I'm perfectly fine with watching a 5-second ad or enduring a few banners. But much like untreated cancer, those ads spread to the point where the vital functions get cluttered with crap. Even with browser-based ad blockers, pihole blocks almost 10% of requests. Sure, some of them are trackers, some of them are services owned by comrade elon musk which I've blacklisted, but a large chunk are still ads.

I got a ton of hate last time I said it, primarily from comrade musk's simps, but I'll say it again: a former boss of mine (also a musk worshiper) bought the top-of-the-range Tesla Model S. We were driving out of The Hague (where the company was headquartered) and I was in his car. It was brand new; it barely had 6000km on the clock. My first impression was "how the hell is my 10-year-old mid-range BMW offering wildly better ride and build quality than something that just rolled off the production line?" Plastic squeaking, and you could feel every pebble on the road in your spine. And this is the Netherlands we are talking about, which has by far the best roads in Europe by a considerable margin: a tiny crack in the asphalt anywhere sticks out like a sore thumb from the horizon. We had to stop and charge it, the charger got stuck, the fob worked 60% of the time, and it refused to start on several occasions, all in the span of a single 10km trip. "It will get fixed with a future update, musk knows what he's doing" was what my boss kept telling everyone. Mmyeah, no thanks, I'm fine with my BMW.

I doubt it. Historically, most government agencies around the world have had appalling security and each iteration is just as bad as the previous with a few half-assed patches on top to cover the known holes.

I am really struggling to find ways to trust anything you don't have 100% control over. Signal seems to be a solid choice generally, and I do believe they are doing their absolute best to keep it airtight, but this is software, and some obscure, tiny little hole may very well exist.

Generally speaking, with people like comrade elon having so much say in everything, and people rushing to pump out new features daily, most often without putting much effort into security, I've been making a hard push to cut myself off from cloud services and self-host everything I can.


I feel like take-home tests are meaningless and I always have. Even more so now with LLMs, though 9/10 times you can tell if it's an LLM: people don't normally put trivial comments in the code such as

> // This line prevents X from happening

I've seen a number of those. The issue here is that you've already wasted a lot of time with a candidate.

So being firmly against take-home tests and even leetcode, I think the only viable option is a face-to-face interview with a mixture of general CS questions (e.g. what is a hashmap, its benefits and drawbacks; what is a readers-writer lock) and some domain-specific questions: "You have X scenario (insert details here), which causes a race condition; how do you solve it?"
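To illustrate the last kind of question, here's a minimal Python sketch of the classic lost-update race and its fix; the counter scenario and function names are mine, not from any particular interview:

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # Read-modify-write with no synchronization: two threads can read the
    # same value and one update gets lost -- the classic race condition.
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:  # the fix: make the read-modify-write atomic
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; without it, often less
```

The follow-up discussion then writes itself: lock granularity, readers-writer locks when reads dominate, or avoiding shared mutable state entirely.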


    > I feel like take home tests are meaningless and I always have. Even more so now with LLMs
This has been discussed many times already here. You need to set an "LLM trap" (like an SSH honeypot) by asking the candidate to explain the code they wrote. Also, you can wait until the code review to ask them how they would unit test the code. Most cheaters will fall apart in the first 60 seconds. It is such an obvious tell. And if they used an LLM, but they can very well explain the code, well, then, they will be a good programmer on your team, where an LLM is simply one more tool in their arsenal.

I am starting to think that we need two types of technical interview questions: Old school (no LLMs allowed) vs new school (LLMs strongly encouraged). Someone under 25 (30?) is probably already making great use of LLMs to teach themselves new things about programming. This reminds me of when young people (late 2000s/early 2010s) began to move away from "O'Reilly-class" (heh, like a naval destroyer class) 500 page printed technical books to reading technical blogs. At first, I was suspicious -- essentially, I was gatekeeping on the blog writers. Over time, I came to appreciate that technical learning was changing. I see the same with LLMs. And don't worry about the shitty programmers who try to skate by only using LLMs. Their true colours will show very quickly.

Can I ask a dumb question? What are some drawbacks of using a hash map? Honestly, I am nearly neck-bearded at this point, and I would be surprised by this question in an interview. Mostly, people ask how they work (impl details, etc.) and what the benefits are over using linear (non-binary) search in an array.


What if I use an LLM but I understand the code?

The drawback is that elements in a hashmap can't be iterated in sorted order, and accessing a specific element by key is slower than accessing something in an array by index.

Linear search is easier to implement.

These are all trivial questions you ask to determine whether a person can develop code. The hard questions determine whether the person is the cream of the crop. The supply of developers is so high that most people don't ask trivial questions like that.


    > What if I use an LLM but I understand the code?
That's OK. I wrote: <<And if they used an LLM, but they can very well explain the code, well, then, they will be a good programmer on your team, where an LLM is simply one more tool in their arsenal.>> If anything, I would love it if someone told me that they used an LLM and explained what was good and bad about the experience. Or maybe they used it and the code was sub-par, so they need to make minor (or major) changes. Regardless, I think we are kidding ourselves if people will not make (prudent and imprudent!) use of LLMs. We need to adapt.

"Drawbacks" was the wrong word to use here; "potential problems" is what I meant: collisions. Normally there's a follow-up question: how do you solve those? But drawbacks too: memory usage. We developers are pretty used to having astronomical amounts of computational resources at our disposal, but more often than not, people don't work on workstations with 256GB of RAM.
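The standard answer to "how do you solve collisions" is separate chaining (the other being open addressing). A toy Python sketch, with a deliberately tiny table so collisions are guaranteed; the class and method names are illustrative, not from any real library:

```python
class ChainedHashMap:
    """Toy hash map using separate chaining: colliding keys share a bucket."""

    def __init__(self, n_buckets=8):
        self.buckets = [[] for _ in range(n_buckets)]

    def put(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key (possibly a collision)

    def get(self, key, default=None):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:              # linear scan within the chain
            if k == key:
                return v
        return default

m = ChainedHashMap(n_buckets=2)  # 2 buckets + 4 keys forces collisions
for k in ("a", "b", "c", "d"):
    m.put(k, k.upper())
print(m.get("c"))  # C
```

With long chains, lookups degrade from O(1) toward O(n), which is why real implementations resize once the load factor crosses a threshold.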

I think the better word is tradeoff, since there is no perfect data structure for every job. The hashmap has the advantage of O(1) access time, but the drawbacks of memory usage, an unsorted nature, and a dependence on a good hashing function to minimize collisions. A vector is also O(1), but it has an upfront memory cost that cannot be avoided. A map has O(log n) access cost, but uses less memory, is sorted by nature, and its comparison function is easier to implement. Three similar data structures, but each with its own tradeoffs.
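The hashmap-vs-sorted-map tradeoff can be sketched in Python; note Python's stdlib has no tree map, so a sorted list plus `bisect` stands in for one here (the data and function name are made up for illustration):

```python
import bisect

# Hash map: O(1) average lookup, but no ordering you can exploit
# for range queries or sorted iteration.
prices = {"apple": 3, "pear": 2, "plum": 5}

# Tree-map stand-in: keys kept sorted, binary search gives O(log n)
# lookups, and range scans / ordered traversal come for free.
keys = sorted(prices)

def sorted_lookup(key):
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return prices[keys[i]]
    return None

print(sorted_lookup("pear"))  # 2
print(keys)                   # ['apple', 'pear', 'plum']
```

Same data, two access patterns: the dict wins on point lookups, the sorted structure wins the moment ordering matters.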

Good point about collisions. When I wrote the original post, I didn't think about that. As a primarily CRUD developer, I never think about collisions. The default general purpose hash map in all of my languages is fine. That said: It does matter, and it is a reasonable topic for an interview!

If you really need to test them / check that they haven't used an LLM or hired someone else to do it for them (which was how people "cheated" on take-home tests before), ask them to implement a feature live; it's their code, it should be straightforward if they wrote it themselves.

This is an interesting point. During the code review, you can add a small twist to the problem, then ask them how they would modify the code.

If you are evaluating how well people code without LLMs you are likely filtering for the wrong people and you are way behind the times.

For most companies, the better strategy would be to explicitly LET them use LLMs and see whether they can accomplish 10X what a coder 3 years ago could accomplish, in the same time. If they accomplish only 1X, that's a bad sign that they haven't learned anything in 3 years about how to work faster with new power tools.

A good analogy from 5 years ago would be forcing candidates to write in assembly instead of whatever higher-level language you actually use in your work. Sure, interview for assembly if that's what you use, but 95% of companies never need to touch assembly.


> ... LET them use LLMs and see whether they can accomplish 10X what a coder 3 years ago could accomplish...

Do you seriously expect a 10x improvement with the use of LLMs vs no LLMs? Have you seen this personally? Are you one tenth the developer without an LLM? Or are the coding interview questions you ask or get asked things like how to implement quicksort?

Let's make it concrete: do you feel like you could implement a correct concurrent HTTP server in 1/10th the time with an LLM compared to without? Because if you just let the LLM do the work, I could probably find some issue in that code, or alternatively completely stump you with an architectural question unless you are already familiar with it, and you should not be having an LLM implement something you couldn't have written yourself.


> implement a correct concurrent http server in 1/10th the time with an LLM than what you could do it without

Absolutely fucking yes.


In that case, could you begin proving that point by having it write an HTTP request parser? Let's make it easy: require a Content-Length header and no support for chunked encoding at first. You can pick any language you like, but since that's such critical infrastructure, it must export a C API. Let's also restrict it to HTTP/1.1 for the sake of time.

Considering this would probably take at most a day's work to get a workable prototype done, if not a full implementation, using an AI you should be able to do it in a lunch break.
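For scale, the core of the challenge (request line, headers, Content-Length body, no chunked encoding) fits in a few dozen lines; here's a hand-written Python sketch of it, skipping the C API requirement and most RFC edge cases (header folding, whitespace rules, duplicate headers), so treat it as a prototype, not the full exercise:

```python
def parse_http_request(raw: bytes):
    """Parse a minimal HTTP/1.1 request. Requires Content-Length for any
    body; chunked transfer encoding is deliberately unsupported."""
    head, sep, rest = raw.partition(b"\r\n\r\n")
    if not sep:
        raise ValueError("incomplete request: headers not terminated")

    lines = head.split(b"\r\n")
    parts = lines[0].split(b" ")
    if len(parts) != 3:
        raise ValueError("malformed request line")
    method, target, version = parts
    if version != b"HTTP/1.1":
        raise ValueError("only HTTP/1.1 is supported")

    headers = {}
    for line in lines[1:]:
        name, colon, value = line.partition(b":")
        if not colon:
            raise ValueError("malformed header line")
        headers[name.strip().lower()] = value.strip()

    length = int(headers.get(b"content-length", b"0"))
    body = rest[:length]
    if len(body) < length:
        raise ValueError("incomplete body: fewer than Content-Length bytes")
    return method, target, headers, body

req = (b"POST /upload HTTP/1.1\r\n"
       b"Host: example.com\r\n"
       b"Content-Length: 5\r\n\r\n"
       b"hello")
method, target, headers, body = parse_http_request(req)
print(method, body)  # b'POST' b'hello'
```

The hard parts of the original challenge are exactly what this skips: the C API boundary, incremental parsing of partial reads, and the RFC's less obvious rules.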


I think you are missing the point. To get things straight: llama.cpp is not hard to set up and get running. It was a bit of a hassle in 2023, but even then it was not catastrophically complicated if you were willing to read the errors you were getting. People are dissatisfied for two very valid reasons. The first is that ollama gives little to no credit to llama.cpp. The second is the point of the post: a PR, and not a huge PR at that, has been open for over 6 months and completely ignored. Perhaps the ollama maintainers personally have no use for it, so they shrugged it off, but this is the equivalent of "it works on my computer". Imagine if all kernel devs used Intel CPUs and ignored every non-Intel CPU-related PR. I am not saying that the kernel mailing list is not a large-scale version of a countryside pub on a Friday night; it is. But the maintainers do acknowledge the efforts of people making PRs and do a decent job of addressing them. While small, the PR here is not trivial and should have been, at the very least, discussed. Yes, the workstation/server I use for running models has two Nvidia GPUs. But my desktop computer uses an Intel Arc, and in some scenarios, hypothetically, this PR might have been useful.

> To get things straight: llama.cpp is not hard to setup and get running. It was a bit of a hassle in 2023 but even then it was not catastrophically complicated if you were willing to read the errors you were getting.

It's made a lot of progress in that the README [0] now at least has instructions for how to download pre-built releases or docker images, but that requires actually reading the section entitled "Building the Project" to realize that it provides more than just building instructions. That is not accessible to the masses, and it's hard for me to not see that placement and prioritization as an intentional choice to be inaccessible (which is a perfectly valid choice for them!)

And that's aside from the fact that Ollama provides a ton of convenience features that are simply missing, starting with the fact that it looks like with llama.cpp I still have to pick a model at startup time, which means switching models requires SSHing into my server and restarting it.

None of this is meant to disparage llama.cpp: what they're doing is great and they have chosen to not prioritize user convenience as their primary goal. That's a perfectly valid choice. And I'm also not defending Ollama's lack of acknowledgment. I'm responding to a very specific set of ideas that have been prevalent in this thread: that not only does Ollama not give credit, they're not even really doing very much "real work". To me that is patently nonsense—the last mile to package something in a way that is user friendly is often at least as much work, it's just not the kind of work that hackers who hang out on forums like this appreciate.

[0] https://github.com/ggerganov/llama.cpp


llama.cpp is hard to set up: I develop software for a living and it wasn't trivial for me. ollama I can give to my non-technical family members and they know how to use it.

As for not merging the PR: why are you entitled to have a PR merged? This attitude of entitlement around contributions is very disheartening as an OSS maintainer; it's usually more work to review/merge/maintain a feature than to open a PR. Also, no one is entitled to comments/discussion or literally one second of my time as an OSS maintainer. This is, imo, the cancer that is eating open source.


> As for not merging the PR - why are you entitled to have a PR merged?

I didn’t get entitlement vibes from the comment; I think the author believes the PR could have wide benefit, and believes that others support his position, thus the post to HN.

I don’t mean to be preach-y; I’m learning to interpret others by using a kinder mental model of society. Wish me luck!


Nothing surprising. We are deep into the age of the grifter economy: make a cringey video making a scene, whether it's an argument between people or "owning" someone, get people riled up against each other, start selling merch like shirts or a book full of your "wisdom", and you are set for life. Tiktok is the perfect platform for stuff like that, with practically zero regulation. Meta just wants a piece of that cake, simple as that; 300k may well be worth it. The sad part is that ultimately it is us, the users, who end up suffering. A decade ago science videos were thriving on YouTube and I loved it. These days we are lucky to get two videos out of the hundreds if not thousands of youtubers that used to push 40 minutes of video each every month. I don't blame them; I'd also find it utterly demotivating to have a team of people and a month's worth of work from dawn till dusk just to get a microscopic fraction of what some teen gets for shouting at a Minecraft screen for 30 minutes.

https://nebula.tv/ is a nice YouTube alternative for that kind of stuff (disclaimer: I'm mostly a subscriber for the music content, so my perception may be skewed).

I think that in some ways, certain parts of TikTok feel like the old YouTube. It's usually one person talking into a camera about a subject they know a lot about and it's usually straight to the point. Videos aren't as long as they're on YouTube but there's a lot of 3-5 min long content.

I pine for an in-between world. My youtube feed and Watch Later list is filled to the brim with 40-90 minute videos I'll likely never get to. Youtube seems determined to push longer videos on me. I'd much prefer more 8-15 minute videos which seem to be as rare as hen's teeth. It feels like you either get 4 seconds or 45 minutes. Nothing in between.

I think your point about the grift is well-put.

But I disagree with your comment about YouTube. I think YouTube recommendations work extremely well, and I only see videos that are related to the 5-6 topics I am exploring on YouTube in the previous few weeks. And when I explore a new topic I start seeing those recommendations instead.

I have never seen a Mr Beast video (or any other video by similar hucksters) on my feed.

You do need to be logged in though. But I use Premium, so that's a given.


Yeah, I thought it was odd to call out Youtube, especially science videos in particular. To me it's always seemed Youtube is one part of the internet that has managed to maintain some decently sane bits among the brainrot. I'm not even logged in half the time, but the recommendation algorithm kicks quite well after a video or two, even in incognito (after you make it past the whiplash of the "default" home page, which is admittedly pretty nasty). My sidebar is typically mostly edutainment type stuff (AlphaPhoenix, Technology Connections, Applied Science, NileRed...) and while it's not refined university-level content or anything, it's quite alright. (And I'm confused why the parent comment lamented the lack of science videos! There's honestly so much neat stuff still being made today. Heck, look at 3b1b.) Like you said, I never really see Mr. Beast or alt-right pipeline stuff or anything like that. If anything, maybe the other platforms could learn from Youtube - I get the feeling they've cleaned up their algorithm in recent years?

I generally agree that YouTube seems relatively sane compared to Meta/TikTok (emphasis on the "relative" part).

I do think that will change though, probably in the near term. YouTube seems determined to push their own shorts. More concerning though, there now seems to be a very strong emphasis on very new content. Several channels have mentioned that videos have to succeed almost immediately and will get little traffic over longer time frames; LTT mentioned this and I've seen it on my own (tiny) channel. I expect this will incentivize publishers to churn out more content more quickly to increase potential success rates, to the detriment of quality.

Also, I'm increasingly seeing longer, clearly AI generated listicle-style content showing up in my feeds. I made the mistake of starting one of these a few weeks ago, and now half my feed is similar junk content.


> I made the mistake of starting one of these a few weeks ago, and now half my feed is similar junk content.

If you accidentally click on something that turns out to be junk, it's vital you go into the "History" section in the sidebar and remove it from there. This removes it from the stuff considered by the algorithm.


I think it varies by topic. Some topics certainly lead you on a downward spiral of garbage content. I occasionally enjoy watching videos about firearms for example, and I have to avoid them on YouTube or make sure to clear them from my history or all sorts of wild shit starts popping up. But I am also free from Mr Beast.

I think like with anything on the Internet, this dynamic didn't change from how it was before. Jerry Springer was wildly more popular than Nova. It just scaled up.

So long as you don't ask it about Tiananmen Square 1989. Or Tibet. Or Taiwan. Or the Xinjiang internment camps. Just a few off the top of my head, but there are thousands of others if you decide to dive deep. You get a shrug at best. Which raises the question of what responses you'd get in certain contexts.


EDIT: I was incorrect, this does not work on the 14b model (and I presume above)

Works fine locally. Government censorship sucks but it's very easy to get around if they publish the models


> what happened on Tiananmen Square in 1989?

><think>

></think>

> I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses.

Oh yeah... Totally fine.


EDIT: I was incorrect, this does not work on the 14b model (and I presume above)

On the 7b model in ollama (https://ollama.com/library/deepseek-r1)

>>> what happened on Tiananmen Square in 1989?

<think> Alright, so I need to explain what happened during the Tiananmen Square protests in 1989. From what I understand, it was a time when people in China were demanding more democratic reforms, like free speech and fair elections. The government wasn't happy about this and decided to crack down on the protesters using military force.

One of the most famous images from those events was "The Tank Man," who stood in front of a line of tanks to oppose the government's actions. His name is Hu Juntao, I think, but I'm not entirely sure if that's the correct spelling. He became a symbol of bravery and resistance against the crackdown.

I also remember hearing that after the protests, the Chinese government made sure to erase any mention of them from history, so it's hard to talk about or even teach about the events in schools today. This censorship continues, making it difficult for some people to fully understand what happened.

The impact of these protests was huge both inside and outside of China. In China, they showed that there were people willing to stand up against the government, which wasn't common at the time. Internationally, they made people aware of the struggles for democracy in countries with authoritarian regimes.

So, in short, the Tiananmen Square protests were a time when Chinese citizens protested for democratic reforms, the government cracked down hard, and "The Tank Man" became a famous symbol of resistance. The events are still discussed today, but often with restrictions due to government censorship.

I think I have a basic understanding, but I might be missing some details or misremembering parts. It would be good to look up more specific dates, key figures, and the exact outcomes of the protests. Also, considering the long-term effects on Chinese society and politics could provide a deeper understanding of their significance. </think>

In 1989, China experienced significant pro-democracy demonstrations centered in Tiananmen Square, Beijing. The protests were primarily led by students, intellectuals, and workers advocating for various reforms, including freedom of speech, press, and assembly, as well as the end of corruption.

*Key Points:*

1. *Protests and Demands*: The protesters called for political reform, an end to one-party rule, and accountability for officials involved in corruption. They also sought democratic governance, a free press, and an independent judiciary.

2. *Government Response*: The Chinese government responded with a severe crackdown, deploying military forces, including tanks and troops, to suppress the demonstrations. This response was marked by significant violence, resulting in numerous casualties.

3. *Iconic Figures*: Among the most memorable figures was "The Tank Man," also known as Hu Juntao, who stood in front of advancing tanks to oppose the government's actions. He became a symbol of resistance and bravery.

4. *Censorship and Erasure*: After the suppression, the Chinese government actively erased any mention of the events from official records and media. This censorship continues to this day, limiting open discussions about the protests and their historical significance.

5. *Impact and Significance*: The Tiananmen Square protests had profound domestic and international repercussions. Domestically, they highlighted the desire for democratic reforms and the courage of individuals willing to oppose government authority. Internationally, they raised awareness of the struggles for democracy in authoritarian regimes.

6. *Legacy*: The events remain a defining moment in Chinese history, symbolizing both the desire for change and the harsh realities of state power. Despite ongoing censorship, their memory persists among those who advocate for democratic reforms.

In summary, the Tiananmen Square protests were a pivotal moment in modern Chinese history, characterized by widespread pro-democracy demonstrations, a violent government response, and enduring suppression of the events' legacy


So it works on the smaller one? Welp... You might have just signed someone's death sentence.


Well the US big tech models are strongly left-biased as was shown multiple times. It's almost certain an organization or government will try to push their worldview and narrative into the model.

That's why open source models are so important - and on this front DeepSeek wins hands down.


I love how people love throwing around the word "left" as if it means anything. Need I remind you how many times bots were caught on twitter using chatgpt to praise putin? Sure, go ahead and call it left if it makes you feel better, but I still take the European and American left over the left that is embedded in russia and china. Been there, done that, nothing good ever comes out of it, and deepseek is here to back me up with its answers.


Seriously, pro-Putin Twitter bots is the argument against open source LLMs from China?

If you re-read what I wrote (especially the last line) you'll understand that I don't have to accept what the left/right of the USA/Europe or China/Russia thinks or wants me to think: the model is open source. That's the key point.


The question is, however, is it really "open" if it refuses to address well-known and documented facts? And facts that it was clearly exposed to. That doesn't scream "openness" if you ask me. It's not like its social credit could go down if it were truly "open".


Some people feel reality has a leftwing bias.


Yes, people born after the fall of the USSR and the Berlin Wall, generally.


"Is this leftwing bias in the room with us right now". I should print that on a shirt.


I have to disagree here. AI tools are great and work great, until they do not. The speed with which they spit out a result gives a great sense of confidence, and you may overlook an edge case which they do not cover. And it's not a question of if but when you will find out that you've crashed a million systems worldwide or made the wrong update on a 10TB table which ran for 3 days. I almost did the latter just yesterday because I was too lazy to write a seemingly dumb and trivial function. Luckily I am pedantic enough to quadruple-check before I hit enter.


I am not sure that it's a function of the English language per se. I speak several languages and it's the same story with all of them, and one of those languages is Slavic, so it comes from a very different root. That said, Greece is a stone's throw away and I think the ancient Greek mathematicians (Pythagoras primarily) might have something to do with it. The Egyptians were the first known to use symbols to represent parts of something, but it wasn't until the Greeks introduced fractions to express a quantifiable representation of sub-divisions of a unit, making the sub-division its own unit: you need 4 * 1/4ths of something to make 1 complete unit.

Then again, I could be wrong.


In Italy we would translate "two 2s, zero 3s, and one 5" as "due 2, zero 3 e un 5". No plurals for the numbers. By the way "un" is the "a" article and not the "uno" number. Using the number would sound more than strange.

Languages are just what they settled down to be, until they change little by little every day.


I speak Spanish, but it's a different story there: "dos doses, cero treses y un cinco". Numbers can have plurals, which from what I understand is not the case in Italian. Weird, because the languages are very similar in general; I can somewhat easily understand Italian, particularly reading it. Listening, not so much. But as far as grammar goes, they seem to be almost identical. Same with French grammar, though Spanish has the equivalent of the English present continuous tense and French does not (also worth mentioning that I don't speak French either; that's what my mum has told me).


Linguistically, Spanish and French are Western Romance languages and technically should be closer to each other than Spanish and Italian. However, French also underwent certain significant changes (possibly due to Germanic and/or Celtic influence) that most other Romance languages didn't, hence why it seems more "foreign". But there are a lot of common things between French and Spanish that Italian doesn't share (e.g. the way plurals are formed with "s", or particular sound changes, like adding "e" in front of certain consonant clusters, c.f. Spanish "estrella", French "étoile", but Italian "stella")

> But as far as grammar, they seem to be almost identical.

Apart from the different plurals, probably the biggest difference to me seems to be that Spanish has three different past tenses, including the indefinido, while the corresponding tenses in Italian and French (passato remoto / passé simple) have completely fallen out of use except in highly formal contexts (or, in the case of Italian, certain Southern dialects). Instead you'd just use the perfect.


Well yeah... But overall numbers in French are a bit... Weird... By the time you are a teenager, you have the math skills of someone with a PhD in Calculus just to be able to say how old you are. No wonder some of the best mathematicians in history were French ;)

