Markets are not as efficient as the textbooks would have you believe. Investors typically rely on a fairly small set of analysts for market news and views, and it can take those analysts a while to think things through, write a note, etc. The market reaction to the DeepSeek news last year also lagged by several days.
I'm out of the loop, but I thought there were sophisticated automated trading algorithms where people pay to install microwave antennas so they can have 1ms lower latency. And I thought those systems were hooked up to run sentiment analysis on the news. Maybe the news is late?
That is generally only applicable to extremely momentary arbitrage opportunities. There's still a lot of automation, but it's pretty boring: basically software that looks at the news and makes a recommendation to a fund manager or similar, with various competing vendors of such systems all the way down to consumer products.
- Data centres need a lot of power = vast solar arrays
- Data centres need a lot of cooling. That's some almighty heatsinks you're going to need (rough numbers in the sketch after this list)
- They will need to be radiation-hardened to avoid memory corruption = even more mass
- The hardware will be obsolete in like 2 years tops and will need replacing to stay competitive
- Data centres are about 100x bigger (not including solar panels and heat sinks) than the biggest thing we've ever put in space
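A rough back-of-envelope on the cooling bullet, with numbers that are purely my own assumptions (100 MW of waste heat, radiators at 300 K, emissivity 0.9, flat panels radiating from both faces):

    # Back-of-envelope: radiator area needed to dump data-centre heat in vacuum,
    # where radiating it away is the only option. All figures are assumptions.
    SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W / (m^2 K^4)
    waste_heat_w = 100e6      # assume 100 MW of IT load, all of it becomes heat
    temp_k = 300.0            # assumed radiator temperature
    emissivity = 0.9
    faces = 2                 # a flat panel radiates from both sides

    flux = emissivity * SIGMA * temp_k ** 4     # ~413 W/m^2 per face
    area_m2 = waste_heat_w / (flux * faces)     # ~120,000 m^2
    print(f"flux: {flux:.0f} W/m^2 per face, area: {area_m2:,.0f} m^2")

Even under those fairly generous assumptions, that's on the order of 0.1 km^2 of dedicated radiator panel per 100 MW, before you count the solar array feeding it.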
Tesla is losing market share (and ranks increasingly poorly against alternatives), his robots are gonna fail, this datacentre ambition would need to break the laws of physics, grok/twitter is a fake-news, pedo-loving cesspit that's gonna be regulated into oblivion. It's only down from here on out.
Maybe instead of housing life, civilizations develop Dyson spheres to house data centers: solar panels on the interior, thermal radiators on the exterior, and the data centers making up the structure in between. Combine that with von Neumann probes and you've got a fun new Fermi paradox hypothesis!
Don't combine it with von Neumann probes and you've solved the Fermi paradox: a civilization that puts that much work into computing power is either doing the equivalent of mining crypto and going nowhere, or is doing AI and is so dependent on it that it inevitably forms a vast echo chamber (echo sphere?) that only wants to talk to itself (itselves?) and can't bear to be left out by the latency that distance unavoidably adds.
tl;dr: civilizations advanced enough to travel between stars end up trapped by the resources and physics required to keep up with the Joneses.
The calendly thing is interesting. I get a fair amount of prospecting outreach on LI, and if anyone asks me to book a slot myself, it's an instant no. I feel like you can't be bothered to actually engage with me, that you are not prepared to do this tiny bit of work. It almost feels disrespectful?
Unless you already have the gear and a subscription, I'm not sure how an Iranian citizen can get Starlink set up: Starlink doesn't ship there, so the hardware has to be individually imported, and it will need to be paid for with a debit/credit card from a non-sanctioned country.
Yeah, but I'm not sure how someone in Iran can actually get the hardware shipped to them (I just tried Tehran as the delivery address, and the Starlink website said "no"), and they would also need a bank account or credit card from a non-sanctioned country to actually pay for it.
As I upgraded my thinkpad to 32 GB of RAM this morning (£150) I remembered my £2k (corporate) thinkpad in 1999, running Windows 98, had 32 MB of RAM. And it ran full Office and Lotus Notes just fine :)
I am absolutely AGOG to know why this has to be a separate device. It must involve hardware and/or instrumentation not built into smartphones. Microwave scanner? Mini x-ray machine? Neutrino detector??? What could it be?
The market is ripe for ChatGPT in a box, replacing Google Home or Alexa desktop pucks. God knows the Google Home assistant has been detuned and detuned to the point it barely works for turning the lights on and off at this point. There's a handful of golf-ball-shaped objects on AliExpress for $25 that provide this functionality, powered by an ESP32 IoT chip, but they don't have wake-word capability (yet). I picked up two for a Home Assistant voice assistant project but haven't had time to dive into it yet.
I've got codex-cli with speech-to-text hooked up to (among other things) Home Assistant via MCP.
It'll do anything. I can literally tell it to play some music from a playlist and make the lights flash to the beat, and it'll just figure out how to do that.
Is it fast? Not really. Is it annoyingly slow for quick tasks like turning the lights off? Not too annoying anyway. Turning the lights on/off takes about 4 seconds from when I finish speaking.
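Something like the sketch below is roughly what the wiring looks like (not my exact setup; it assumes the official Python MCP SDK and Home Assistant's standard REST API, and the URL, token, and entity id are placeholders):

    # Rough sketch (not my exact setup): expose a Home Assistant light control
    # as an MCP tool so an agent like codex-cli can discover and call it.
    # Assumes the official `mcp` Python SDK and the `requests` library;
    # HA_URL, HA_TOKEN and the entity id are placeholders.
    import requests
    from mcp.server.fastmcp import FastMCP

    HA_URL = "http://homeassistant.local:8123"   # your Home Assistant instance
    HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"    # from your HA user profile
    HEADERS = {"Authorization": f"Bearer {HA_TOKEN}"}

    mcp = FastMCP("home-assistant-lights")

    @mcp.tool()
    def set_light(entity_id: str, on: bool) -> str:
        """Turn a Home Assistant light on or off, e.g. entity_id='light.kitchen'."""
        service = "turn_on" if on else "turn_off"
        resp = requests.post(
            f"{HA_URL}/api/services/light/{service}",
            headers=HEADERS,
            json={"entity_id": entity_id},
            timeout=10,
        )
        resp.raise_for_status()
        return f"{entity_id} -> {service}"

    if __name__ == "__main__":
        mcp.run()   # serves the tool over stdio for the agent to call

The agent only needs a couple of coarse tools like that; the choreography (picking the playlist, flashing the lights to the beat) it works out itself, which is presumably also where most of those 4 seconds go.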
Because Apple won’t give you dev access to what you need for this kind of thing on iPhone: always-on audio listening across multiple streams (ambient sound, my voice, whatever is playing in my headphones). Think of an AI assistant that listens to audiobooks together with you and lets you ask questions, look things up, etc.