of course not, no one is considering fusion as a reliable, workable solution for the 2050 net-zero goals. I mean, they don't even consider phasing out gas and oil until the 2040s or even later lol
yes, it is an interesting read; however, you should know that some of their "facts" are highly speculative and some have been proven wrong in the last decade.
Yes, there are very good theoretical reasons for skip connections. If your initial weight matrix M is noise centered at 0, then I + M is a noisy identity operation, while M on its own is a noisy deletion... It's better to do nothing when you don't know what to do, and avoid destroying information.
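To make the intuition concrete, here's a rough numpy sketch (the dimension and the 0.01 init scale are just illustrative): with freshly initialized noise M, applying M alone throws the input away, while x + Mx barely perturbs it.

```python
import numpy as np

# Rough sketch of the argument above; size and init scale are arbitrary choices.
rng = np.random.default_rng(0)
d = 512
x = rng.standard_normal(d)               # incoming activations
M = 0.01 * rng.standard_normal((d, d))   # fresh weights: noise centered at 0

no_skip = M @ x        # "0 + M": essentially noise, the input is destroyed
with_skip = x + M @ x  # "I + M": a noisy identity, the input survives

print(np.linalg.norm(no_skip - x) / np.linalg.norm(x))    # ~1.0: the signal is gone
print(np.linalg.norm(with_skip - x) / np.linalg.norm(x))  # ~0.2: mostly intact
```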
I appreciate the sibling comment's point that memory pressure is a problem, but that can be mitigated by using fewer/longer skip connections across blocks of layers.
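If it helps, here is a minimal sketch of that idea (PyTorch-style; the class name, sizes, and layer choice are just illustrative): a block of plain layers wrapped by one long skip, so only the block input has to be kept around for the skip rather than one tensor per layer.

```python
import torch
import torch.nn as nn

class BlockWithLongSkip(nn.Module):
    """One skip connection spanning a whole block of layers, instead of one per layer."""
    def __init__(self, dim: int, layers_per_block: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
            for _ in range(layers_per_block)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = x
        for layer in self.layers:
            h = layer(h)      # no per-layer residual inside the block
        return x + h          # single skip across the whole block

# Usage: only the block input x is saved for the skip addition,
# at the cost of a weaker "noisy identity" guarantee inside the block.
model = nn.Sequential(BlockWithLongSkip(512), BlockWithLongSkip(512))
out = model(torch.randn(8, 512))
```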
I don't know, it is an opinion and it seems fairly well-founded (i.e. there's no evidence of groundbreaking research on OpenAI's part except for scaling things up).
It seems to me that the breakthrough claim was a desperate attempt by OpenAI staff to get their beloved boss back to OpenAI. If anything, it will most probably be incremental improvements rather than a breakthrough, which is definitely not a bad thing, but let's call a spade a spade [1]. Ironically, the breakthrough happened somewhere else, at Google, and for reasons unknown as of now, Google has missed the boat on commercializing its very own invention.
Personally, I have recently been using ChatGPT-4 on a daily basis, and I have considered myself a long-time, ardent user of Google products (mainly search) for the past two decades. However, the past several years have become increasingly frustrating, since it is getting more difficult to perform "search that matters" (going to trademark this motto). What frequently irks me the most is not random or targeted search with Google, but looking for something I discovered previously, have since forgotten, and really need now. It has been claimed that we spend about 10% of our time looking for items we lost or misplaced, and the same can be said of our acquired knowledge and information. Sometimes we crave knowledge or information we once had but find it very difficult or impossible to recall.

ChatGPT-4, in particular its online search with Bing, is extremely useful in this regard, but at the same time it is very limited: the feature is not well supported, with many failed attempts, perhaps due to limited online data-scraping capability, and thus sub-par results. This specific capability, which I call "search that matters", is like Google search on steroids, and the fact that Google, with its DeepMind subsidiary, has failed to utilize and monetize this opportunity until now is just beyond me. If ChatGPT (call it ChatGPT-5 if you like) can perform this operation intuitively and seamlessly, it will be a game changer but not a breakthrough. Apparently, according to ChatGPT-4, you can have a game changer that is not a breakthrough.
The breakthrough, however, would be to fundamentally improve the AI or the LLM itself, as Stephen Wolfram notes in his tutorial article on ChatGPT, not merely to enhance its existing operations [2]:
>When it comes to training (AKA learning) the different “hardware” of the brain and of current computers (as well as, perhaps, some undeveloped algorithmic ideas) forces ChatGPT to use a strategy that’s probably rather different (and in some ways much less efficient) than the brain. And there’s something else as well: unlike even in typical algorithmic computation, ChatGPT doesn’t internally “have loops” or “recompute on data”. And that inevitably limits its computational capability - even with respect to current computers, but definitely with respect to the brain.
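A toy contrast of the quoted point (purely illustrative, not how any actual model is implemented): a transformer-style forward pass runs a fixed number of layer applications per token, while a conventional algorithm can loop on its own data until it's done.

```python
def fixed_depth_pass(x, layers):
    # Transformer-style: the same fixed number of steps per token,
    # no matter how hard the problem is.
    for layer in layers:      # len(layers) was fixed when the model was built
        x = layer(x)
    return x

def iterative_algorithm(x, step, done):
    # Conventional algorithm: loops and recomputes on its own data until finished,
    # so the amount of computation can grow with problem difficulty.
    while not done(x):
        x = step(x)
    return x

# e.g. Newton's method for sqrt(2) as the "loopy" case:
print(iterative_algorithm(1.0, lambda g: (g + 2.0 / g) / 2.0,
                          lambda g: abs(g * g - 2.0) < 1e-12))
```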
[1] Eight Things to Know about Large Language Models:
I don't understand why investing money in something that is reliable and doesn't require major reworks of the electric grid would be beneficial. Incredible! I am more baffled than you.
also, in case you didn't know, Sweden doesn't get a huge amount of sun (especially in winter). Wind is more affordable, but you can't rely on a single energy source.
What a laughable sentence: there were at least 2 near misses last year between Starlink satellites and Tiangong. Also, if there were a collision, the smaller parts and debris would remain in orbit far longer than expected (hence causing more incidents).
It is not laughable: Starlink satellites are in a low enough orbit that they will automatically deorbit due to air drag. They will not contribute to Kessler Syndrome.
>SpaceX said that a large part of Starlink satellites are launched at a lower altitude of 550 km (340 mi) to achieve lower latency (versus 1,150 km (710 mi) as originally planned), and failed satellites or debris are thus expected to deorbit within five years even without propulsion, due to atmospheric drag.