There is no "coding FPGA". Verilog/VHDL are not programming languages; they are "hardware description languages" (HDLs). That means when you write in those languages, what you're doing is literally specifying every single wire in the circuit. People who do FPGA work aren't even called programmers; they're called designers.
> High-Level Synthesis
Something that absolutely no professional designer uses.
I agree that the sentiment around HLS is very bad, but there aren't many good examples online showcasing the difference. Do you think the HDL vs. HLS tradeoffs would be visible on something like an 8-bit unsigned integer 3x3x3 or 4x4x4 matrix multiplication?
I've started implementing this, but organizing a good testing environment was time-consuming. If someone here has a sandbox and the right skill set, 3x3x3 or 4x4x4 matrix multiplications would be a perfect candidate for FPGA ports and case studies!
Showing the tradeoffs between complexity and throughput of HLS and HDL implementations is the goal. I assume uint8 should be sufficient to make the point?
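For the HLS side, a minimal sketch of the 4x4 uint8 case might look like the following. This assumes a Vitis-HLS-style C++ flow; the function name and the specific pragmas are illustrative, not taken from any existing implementation:

```cpp
#include <cstdint>

// Hypothetical HLS-style 4x4 uint8 matrix multiply. The pragmas ask the
// tool to fully partition the operand arrays and unroll the inner dot
// product, so each output element becomes a small multiplier/adder tree.
// An HDL version would instead describe that arithmetic array and its
// control logic explicitly.
void matmul4x4(const uint8_t a[4][4], const uint8_t b[4][4],
               uint32_t c[4][4]) {
#pragma HLS ARRAY_PARTITION variable=a complete dim=2
#pragma HLS ARRAY_PARTITION variable=b complete dim=1
    for (int i = 0; i < 4; ++i) {
        for (int j = 0; j < 4; ++j) {
#pragma HLS PIPELINE II=1
            // Four 255*255 products overflow 16 bits, so accumulate wider.
            uint32_t acc = 0;
            for (int k = 0; k < 4; ++k) {
#pragma HLS UNROLL
                acc += (uint32_t)a[i][k] * (uint32_t)b[k][j];
            }
            c[i][j] = acc;
        }
    }
}
```

The interesting comparison would then be the LUT/DSP count and achieved clock this synthesizes to, versus a hand-written HDL MAC array doing the same job.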
Would we even recognise it if it arrived? We'd recognise human-level intelligence, probably, but that's specialised. What would general intelligence even look like?
If/when we have AGI, we will likely have something fundamentally superhuman very soon after, and that will be very recognizable.
This is the idea of "hard takeoff": because of the way we can scale computation, there will only ever be a very short window in which AI is roughly human-level. Even if there are no fundamental breakthroughs, at the very least silicon can be run much faster than meat. And instead of compensating for narrower width with execution speed, like current AI systems do (no AI datacenter is even close to the width of a human brain), you could just spend the money to make your AI system 2x wider and run it at 2x the speed. What would a good engineer (or a good team of engineers) be able to accomplish with 10 times the workdays in a week that everyone else has?
This is often conflated with the idea that AGI is very imminent. I don't think we are particularly close to that yet. But I do think that if we ever get there, things will get very weird very quickly.
Would AGI be recognisable to us? When a human pushes over an anthill, what do the ants think happened? Do they even know the anthill is gone? Did they have a concept of the anthill as a huge edifice, or did they only know the earth they squeezed through and some biological instinct?
If general intelligence arrived and did whatever general intelligence would do, would we even see it? Or would there just be things happening that we can't comprehend?
It's ten times the time to work on a problem. Taking a bunch of speed doesn't make your brain work faster; it just messes with your attention system.
> Though I don't know what you mean by "width of a human brain".
A human brain contains ~86 billion neurons connected to each other through ~100 trillion synapses. All of these parts operate genuinely in parallel, working together at the same time to produce results.
When an AI model is run on a GPU, a single ALU can do work analogous to a neuron activation much faster than a real neuron can. But a GPU does not have 86 billion ALUs; it has only ~20k. It "simulates" a much wider parallel processing system by streaming in weights and activations and processing them ~20k at a time. Large AI datacenters have built systems with many GPUs working in parallel on a single model, but they are still a tiny fraction of the true width of the brain, and cannot reach anywhere near the number of neuron activations per second that a brain can.
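To put the width gap in rough numbers, here's a toy calculation using only the figures above (the ~20k ALU count is a ballpark assumption, not a spec sheet number):

```cpp
#include <cstdio>

int main() {
    // Figures from the comment above; both are rough and illustrative.
    double brain_neurons = 86e9;  // all genuinely active in parallel
    double gpu_alus      = 20e3;  // parallel compute elements on one GPU

    // The brain is on the order of 4 million times "wider" than one GPU.
    printf("width ratio: ~%.1f million\n", brain_neurons / gpu_alus / 1e6);
}
```

In other words, the GPU is time-multiplexing a model millions of times wider than itself: the narrow-but-fast end of the tradeoff.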
If/when we have a model that can actually do complex reasoning tasks such as programming and designing new computers as well as a human can, with no human needed to prompt it, we can just scale it out to give it more hours per day to work, all the way until every neuron has a real computing element to run it. The difference in experience for such a system between running "narrow" and running "wide" is just that the wall clock runs slower when you are running wide. That is, you have more hours per day to work on things.
That's what I was trying to express, though: if "the wall clock runs slower", that's less useful than it sounds, because all you have to interact with is yourself.
I exaggerate somewhat. You could interact with databases and computers (if you can bear the lag and compile times). You could produce a lot of work, and test it in any internal way that you can think of. But you can't do outside world stuff. You can't make reality run faster to keep up with your speedy brain.
Possibly. Here we imagine a world of artificial people - well, a community, depending on how many of these people it's feasible to maintain - all thinking very fast and communicating in some super-low-latency way. (Do we revive dial-up? Or maybe they all live in the same building?) And they presumably have bodies, at least one each. But how fast can they do things with their bodies? Physics becomes another bottleneck. They'd need lots of entertainment to keep them in a good mood while they wait for just about any real-world process to complete.
I still contend that it would be a somewhat mediocre super power.
Mustafa Suleyman says AGI is when a (single) machine can perform every cognitive task better than the best humans. That is significantly different from OpenAI's definition (...when we make enough $$$$$, it's AGI).
Suleyman's book "The Coming Wave" talks about Artificial Capable Intelligence (ACI), a tier between today's LLMs (== "AI" now) and AGI: systems capable of handling many complex tasks across various domains, yet not fully general. Suleyman argues that ACI is here (2025) and will have huge implications for society. These systems could manage businesses, generate digital content, and even operate core government services -- as is happening on a small scale today.
He also opines that these ACIs give us plenty of frontier to be mined for amazing solutions. I agree; what we have already has not been tapped out.
His definition, to me, is early ASI. If a program is better than the best humans, then we ask it how to improve itself. That's what ASI is.
The clearest thinker alive today on how to get to AGI is, I think, Yann LeCun. He said, paraphrasing: If you want to build an AGI, do NOT work on LLMs!
Good advice; and go (re-?) read Minsky's "Society of Mind".
I'd accept that as a human kind of intelligence, but I'm really hoping that AGI would be a bit more general. That clever human thinking would be a subset of what it could do.
You could ask Gemini 2.5 to do that today and it's well within its capabilities, just as long as you also let it write and run unit tests, as a human developer would.
AGI isn't ASI; it's not supposed to be smarter than humans. The people who say AGI is far away are unscientific woo-mongers, because they never give a concrete, empirically measurable definition of AGI. The closest we have is Humanity's Last Exam, which LLMs are already well on the path to acing.
Consider this: if an LLM could be born/trained in 1900 (were that possible) and given a year to adapt to the world of 2025, how well would it do on any test? Compare that to how a 15-year-old human in the same situation would do.
I'd expect it to be generalised, where we (and everything else we've ever met) are specialised. Our intelligence is shaped by our biology and our environment; the limitations on our thinking are themselves concepts the best of us can barely glimpse. Some kind of intelligence that inherently transcends its substrate.
What that would look like, how it would think, the kind of mental considerations it would have, I do not know. I do suspect that declaring something that thinks like us would have "general intelligence" to be a symptom of our limited thinking.
> provided they continue to go through with the investment
I recall the Foxconn Wisconsin situation, and I have no doubt Apple et al are well aware of it. String out a pretense of building factories in the US for the next three and a half years? Easy peasy. President Trump will soon get bored of this game anyway and move on to the next one; he already looks like he's bored of it and it didn't bring him universal acclaim and admiration.
Indeed. While the UK and the USA have comparable levels of dental health, in US television actors typically require very good (or rather, cosmetically appealing according to local norms) teeth in order to succeed. In the UK, it's less important.
Not just comparable; the UK is actually a bit higher. The difference is that the NHS doesn't cover anything cosmetic, so teeth there are very healthy, but they look rubbish unless you're lucky.
Genuine question that may come across as snark, but I promise it's not!
"you don't grow up until you have to balance a check book."
Is that just shorthand for "be properly aware of your own budget and finances", or are there literally still enough people using check (cheque, as I might write) books in the US that this is still a literal task people learn?
It's a figure of speech. I don't literally write checks.
But getting my first apartment at 19 was definitely a great experience. I didn't have my parents paying my rent. Either I was going to figure it out or get evicted a 3rd time.
It was much easier back then; my rent was around 40% of a minimum-wage income. Now the same apartment is closer to 70%.
In Santa Barbara, California, working professionals with credentials and a college degree cannot afford (or find?) an ordinary rental unit. The source for that is a news article saying the city may not be able to bring in newly graduated doctors or young police officers.
To be abundantly clear, this was a dingy crummy apartment in a rough neighborhood. For the first few months I lived there I literally would hide my monitor whenever I left the house.
After a while, I grew to love it. I left the monitor out because I was tired of being afraid all the time, I hugged the floor since it meant the next eviction would be up to me.
I have so many fond memories of 2009/2010 KTown, such a great place to live on a budget.
It's gone now; you need a six-figure income to live there, like anywhere else in Los Angeles.
I'm in my early 40s; we still very occasionally have to write checks. Most recently, it's how we've paid for house repairs that were too expensive to feel comfortable paying for in cash.
I'm 42 and in the last 10 years, I have written two checks: One to a dentist who didn't have an online billing portal, and one to a car wrapping business who charged an extra 3% to pay with a credit card.
Even the $6,000 water main replacement and $24,000 roof replacement allowed me to pay through ACH online and so I didn't have to write a physical check.
EDIT: Actually, 10 years would include the last few months I lived in an apartment. I paid my rent with a physical check back then. If I lived in an apartment today, if they accepted online payments of some sort without an extra fee, I'd do that.
Yes there are people still writing checks in the US. I pay my rent by check (paying by bank transfer costs a fee which I refuse to pay).
However I have never used the balance recording part of the checkbook and I think it's completely unnecessary if you know how to operate a computer.
There are likely some older people who still balance their checkbooks but I never was explicitly taught to do it. Instead I see it used as a kind of metaphor for personal budgeting in general.
Agreed. I was taught (and forced) to balance my checkbook as a youth, but starting around 2006, when I switched to a bank with a website/portal, it became essentially busywork. I'd still probably recommend it (or equivalent personal tracking) if you run balances near zero, so you don't overdraft, but for most people it's only a 30-second exercise to pull up the website or app and check your balance. If you're worried about bank errors, self-balancing might be worth it, but that's not something I worry about, and even if I did, I wouldn't expect my personal balance to be accepted as any sort of evidence.
Since it became possible to deposit a cheque by photographing it from a banking app, there are also fewer surprises from the other side: the busy plumber might not find time to drive to the bank during opening hours, but will pay in any cheques they receive the same day through the app.
(From what I remember of my parents discussing this, they would be annoyed if someone they'd paid by cheque didn't deposit it promptly. This was probably before online banking, but when they could conveniently check their balance at an ATM in most supermarkets.)
It's more than just balancing the checkbook; the reasons behind why you do it are many. One is creating a documented trail of evidence you can fall back on if there are any discrepancies, which do happen, both as mistakes/errors and maliciously.
Rare as they may be, a teller adding an extra 0 to the end of a $100 payment during processing is enough to make most people find out the hard way why you need to do these things a certain way.
If the bank refuses to correct the issue, you can take them to small claims court and use your documentation as supporting evidence, along with other evidence such as invoices. Even memorializing something and sending a copy to the other person for their records can be considered implicit acceptance when they do not respond, which bad actors often will not.
If you don't have this, it's your word versus theirs: a he-said-she-said. And there are presumptions in law that may force you to pay if you don't know them.
Things like failing to dispute amounts or demands for something owed within a certain period of time may make you liable for remedying a non-existent credit/debt when it goes before a judge.
I'm 26 from California, I have never had a single occasion to write a check in my life. Neither have any of my peers to my knowledge, but it naturally doesn't come up a lot.
To pay for things, you obviously must use something.
What you may not realize is that checks come with additional protections that you likely don't have with whatever you are using instead.
Payment systems often force you to pay a percentage of the total as a fee, just like using your ATM card at an out-of-network ATM imposes a fee both at the ATM and on your bank's side. That's more than you would pay if you provided a check.
There are also many protections inherent in the process of cashing checks that shield you from various forms of fraud, and the documentation you generate (with carbon copies) can be used in court to support claims when bad actors lie.
By using something else because it's convenient, you effectively sign away protections you didn't know you had.
Not knowing the hows and whys, and not being taught them, becomes detrimental to your future without you even realizing it.
Every so often, it saves me a few hours on a task that's not very difficult, but that I just don't know how to do already. Generally something generic a lot of people have already done.
An example from today was using XAudio2 on windows to output sound, where that sound was already being fetched as interleaved data from a network source. I could have read the docs, found some example code, and bashed it together in a few hours; but I asked one of the LLMs and it gave me some example code tuned to my request, giving me a head start on that.
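The result had roughly this shape (a from-memory sketch, not the actual code it produced; the 48 kHz 16-bit stereo format and the missing error handling are placeholders):

```cpp
#include <windows.h>
#include <xaudio2.h>
#include <vector>
#pragma comment(lib, "xaudio2.lib")

int main() {
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    // Engine plus the mastering voice that feeds the output device.
    IXAudio2* xaudio = nullptr;
    XAudio2Create(&xaudio, 0, XAUDIO2_DEFAULT_PROCESSOR);
    IXAudio2MasteringVoice* master = nullptr;
    xaudio->CreateMasteringVoice(&master);

    // Describe the interleaved PCM arriving from the network source.
    WAVEFORMATEX fmt = {};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 2;
    fmt.nSamplesPerSec  = 48000;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    IXAudio2SourceVoice* voice = nullptr;
    xaudio->CreateSourceVoice(&voice, &fmt);

    // One chunk of interleaved samples; a real client keeps a small ring
    // of these and resubmits as the network delivers more data.
    std::vector<BYTE> pcm(fmt.nAvgBytesPerSec); // placeholder: 1s of silence
    XAUDIO2_BUFFER buf = {};
    buf.AudioBytes = (UINT32)pcm.size();
    buf.pAudioData = pcm.data();
    voice->SubmitSourceBuffer(&buf);
    voice->Start(0);

    Sleep(1000); // keep the process alive while the buffer plays out
    xaudio->Release();
    CoUninitialize();
}
```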
I suspect I had to already know a lot of context to be able to ask it the right questions, and then to tune the result with a few follow-up questions.
The biggest timesaver for me so far is composing complex SQL queries with elements of SQL I don't use very often. In such cases I know what I want, but the specific syntax eludes me. Previously solving that has required poring over documentation and QA sites, but finding the right documentation and gradually debugging is tedious. An LLM gets me farther along.
Same here. I had to write a DynamicObject for a DSL-like system in C# to make it behave like Python dicts.
With some LLM help I was done before lunch. After lunch I wrote some additional unit tests and improved on the solution - again with LLM help (the object type changes in unit tests vs integration tests, one is the actual type, one is a JsonDocument).
I could've definitely done all that by myself, but when the LLM wrote the boilerplate crap that someone had definitely written before (but in a way I couldn't find with a search engine) I could focus on testing and optimising the solution instead of figuring out C# DynamicObject quirks.
Seeing Bezos advocate for free markets is massively dissonant, until I remind myself that he hasn't said what they should be free of. Bezos would like to be free to run competition out of town, force brutal terms on suppliers, all that sort of thing.
“Free markets mean the best product always wins out. It just so happens my subsidiaries make the best product in all markets. Read about why in the Washington Post.” - Bezos, probably
Indeed. Other commodities have other strengths: oil can be turned into a huge range of products that make people's lives better, wheat can literally be used to make food that radically improves people's lived experience, and aluminium is a key component in an enormous range of goods covering almost every aspect of life. If all Bitcoin has to offer is being easy to move around, with no actual application that improves anyone's life, well, gotta go with your only strength, I guess.