I wasn't unemployable as a graduate; I found a job, after all. But I was near enough useless and started from the ground up.
I've always felt my real education in software engineering started at work.
20-odd years later I lead a large engineering team and see the same with a lot of the graduates we hire. There are a few exceptions, but most are as clueless as I was at that age.
Yeah, I graduated around 2000 and had to learn how to work on a professional software engineering team.
That doesn't mean my education was worthless—quite the opposite. It's just that what you learn in a software engineering degree isn't "how to write code and do software development in a professional team in their specific programming language and libraries and frameworks and using their specific tooling and their office politics."
Even professional software development as you know it may not stick around for very long. I believe the knowledge and experience gained along the way translate across different contexts, and you can use that to pivot and adapt to the environment.
I dunno, microcontrollers could help in this zombie apocalypse world we're imagining.
I would think that even at small scale, having things like 3D printers, CNC machines, networked video surveillance and alarm systems, solar arrays, etc. would be very beneficial.
Absolutely, though the top priorities would be much simpler things like food, clean water and shelter.
Process control seems like the big one. Precise tools for precise measurements and monitoring are pretty high up the tech tree, so if you can say "we saved a box of Arduinos and sensors from the Before Times", you can get those capabilities back sooner, and potentially use them as references for tools built with more renewable resources.
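The kind of process control a saved box of microcontrollers buys you can be sketched as a simple bang-bang (hysteresis) loop. A minimal sketch in Python as a stand-in for what would actually run on an Arduino; the setpoint numbers and the idea of controlling, say, a heater are purely illustrative:

```python
# Minimal bang-bang (hysteresis) controller - the kind of loop an
# Arduino plus a temperature sensor gives you for, e.g., a food
# dehydrator or a still. The numbers here are illustrative only.

def control_step(temp_c: float, heater_on: bool,
                 setpoint: float = 55.0, band: float = 2.0) -> bool:
    """Return the new heater state given the current temperature.

    Hysteresis: turn on below setpoint - band, off above setpoint + band,
    and otherwise keep the current state (avoids relay chatter).
    """
    if temp_c < setpoint - band:
        return True
    if temp_c > setpoint + band:
        return False
    return heater_on
```

The dead band is the important part: without it, a reading hovering at the setpoint would toggle the relay constantly.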
I second this; parsing various binary file formats is a great, fun exercise. At various points in my career I started with simple stuff like BMP and worked up to packetized media streams.
Along the way I learned a lot about bit manipulation and reading ISO-style specifications.
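For anyone wanting to try, the BMP header is a friendly place to start: fixed-offset, little-endian fields you can pull straight out with `struct`. A minimal sketch covering the standard file header plus the BITMAPINFOHEADER fields:

```python
import struct

def parse_bmp_header(data: bytes) -> dict:
    """Parse the fixed 54-byte BMP file header + BITMAPINFOHEADER.

    All multi-byte fields are little-endian, hence the '<' prefix.
    """
    if data[:2] != b"BM":
        raise ValueError("not a BMP file")
    # File header: magic (2), file size (4), reserved (4), pixel data offset (4)
    file_size, _reserved, pixel_offset = struct.unpack_from("<III", data, 2)
    # Info header: header size (4), width (4, signed), height (4, signed),
    # color planes (2), bits per pixel (2)
    _hdr_size, width, height, _planes, bpp = struct.unpack_from("<IiiHH", data, 14)
    return {
        "file_size": file_size,
        "pixel_offset": pixel_offset,
        "width": width,
        "height": height,
        "bits_per_pixel": bpp,
    }
```

Note the height is signed: a negative value means the rows are stored top-down, which is exactly the kind of spec detail these exercises teach you to watch for.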
The main problem with those is that the memory controller's performance degrades badly once you load up all 128 of those cores.
We were doing some testing, and >96 cores at 100% caused a massive degradation in performance. We ended up going with dual 32C/64T EPYCs (which cost twice as much) as a result. If they fix it in the Altra One chips they'll be back on the table, though, because they were very good power-wise for our workload and quite price-competitive in a Supermicro chassis.
They are relatively popular legal pets in Europe - though not sold in many mainstream pet shops. 100% of those kept as pets here are bred in captivity.
Having kept them before, they are genuinely about as hard to care for as goldfish but need bigger tanks and a little bit more cleaning.
They're also super easy to breed. We let the spawn hatch once and ended up with about 70 larvae; they cannibalize quickly, but 6 grew to full size and we sold them on very easily.
Also note most goldfish are abused: they need huge tanks (40 gallons+, if I remember right), not the 8-16 fl oz bowls they're stereotypically kept in, and they need filtration and would do better with a water heater. So I don't know if pointing to goldfish is the best indicator, even if imo you're technically correct - the refrigeration system isn't really a lot to maintain, and that's the main difference.
To add to this, the ideal temperature is 16-18 C (we use a Pi to keep a running graph of the temperature). Lower is generally OK; apparently if it gets into the mid-20s they get so stressed they try to escape the tank. We have a chiller, though we have also used the fairly common trick of putting ice bottles in the tank as needed (milk containers filled with water and left in the freezer for 24 hours work well).
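The Pi setup is really just periodic sampling plus a threshold check. A minimal sketch of the classification side, using the 16-18 C band and the mid-20s danger point mentioned above (how you actually read the probe - 1-Wire, USB, whatever - is left out as it varies by sensor):

```python
def classify_temp(temp_c: float) -> str:
    """Bucket a tank temperature reading against the axolotl comfort band.

    16-18 C is ideal, cooler is generally fine; by the mid-20s they get
    stressed enough to try to escape, so flag that well before it happens.
    """
    if temp_c >= 24.0:
        return "danger: chill now (ice bottles / chiller)"
    if temp_c > 18.0:
        return "warm: keep an eye on it"
    return "ok"
```

On the Pi this runs in a loop every few minutes, with the reading and the bucket appended to a log file that the graph is drawn from.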
Indoor water temp was fine for us, maybe depends a lot on the climate where you live, cool is easy here!
Hopefully we kept them well. We had them for years and in that time they spawned regularly, ended up giving our original pair up for adoption when we moved house.
The tank was smaller than 40 gallons. You're right, though, that bigger is better - that's the general advice with all pets (including goldfish, which can grow huge!).
For inference, inter-GPU bandwidth is generally not the bottleneck because computation proceeds through the model layer by layer. The most common way to split a model assigns each GPU a subset of the LLM's layers, and it doesn't take much PCIe bandwidth to send the layer activations on to the next GPU (the weights themselves stay resident on each card).
My understanding is that each GPU must still load its assigned layers from VRAM into registers and L2 cache for every token, because those aren't nearly large enough to hold a significant portion of the weights. So naively, for 24 GB of layers, you'd need to move up to 24 GB for every token.
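That reasoning gives a simple back-of-the-envelope ceiling: if each generated token streams all resident weights out of VRAM once, memory bandwidth divided by weight size bounds tokens per second. A rough calculation with illustrative (not measured) numbers:

```python
def max_tokens_per_sec(weight_bytes: float, mem_bandwidth_bytes_per_sec: float) -> float:
    """Naive memory-bandwidth ceiling on decode speed: assumes every
    generated token streams all resident weights from VRAM exactly once."""
    return mem_bandwidth_bytes_per_sec / weight_bytes

GIB = 1024 ** 3
# e.g. 24 GiB of weights on a GPU with ~1 TB/s of VRAM bandwidth
rate = max_tokens_per_sec(24 * GIB, 1000e9)  # roughly 39 tokens/sec
```

This is why single-stream decode is memory-bandwidth-bound rather than compute-bound, and why batching helps: the same weight traffic is amortized over many tokens.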
I don't know if Go counts as "systems programming" like the other commenter mentions.
But I have been recently using it for some tooling and small servers on personal projects where I'd have used python before.
Frankly it's been a joy and I wish I'd started earlier. The concurrency primitives are great, and the static binaries make deployment easy (raspberry pi in this case).
I struggle to use anything other than Python professionally; the need to settle on a common denominator trumps pretty much everything else.
Thinking a bit further ahead, what does the world look like in 30-40 years, when a generation has been accustomed to this type of interaction from birth?
Feels like trying to imagine the societal impacts of the internet in the early 90s.
It would be cool if we could finally kill off false information. Not that I trust big tech to do so, but at least the possibility is there for the most trusted entity in a person's life to be strongly grounded in reality.
This is an interesting take, and I'd guess that the training data for this probably did use podcasts as a source.
Getting very realistic, real-world conversational training data for an AI would be hard. Only a subset of us appear on podcasts, radio or TV, and we probably all speak in a slightly artificial manner when we do.
When I commented on the unnatural cadence, it told me that it had been trained on podcasts, which does help explain the issue - some people tend to "live-edit" themselves when a conversation is being recorded, which leads to this staccato. It seems they need to find a better source of training data for more natural conversational speech.
I agree. I think it's probably very easy to find billions of hours of conversation on YouTube, but none of it is set up as training data with a good transcript.
Yep! It's public dialogue, intended for an audience, with a prepared topic, etc. Or it's actors imitating private dialogue, but again shaping it towards an audience.
AI agents like this are trying to recreate personal intimacy I guess, which does feel like it might be different somehow.