Interesting, a cool resource for an API endpoint for AIS data is aisstream.io. Seems quite solid. Anyone have any idea of a good resource for satellite AIS data? I feel like the EU probably funded one and I can’t find anything on Copernicus etc.
Super interesting to hear your experience, I agree that it is very dystopian. I have put up with it (with an effect on my performance and somewhat on my mental health) to be around my family more. Things like doing pick-ups and drop-offs at school consistently have been wonderful.
Amazing story, congrats on being a great coach - these kids will remember you and that experience.
My experience with tech as a parent (3 kids under 10): I find their time on iPads etc. playing games, music and audiobooks to be good for them (they don’t get grumpy after it, and playing Roblox with their friends online in particular is great fun - real Halo 3 vibes for me). Watching shows, they get quite difficult afterwards if they have watched for extended periods (the smaller the screen, the worse it is). But if they get access to anything with a constant scroll / stream of things, they go haywire. My son found YouTube on his nana’s iPad, mainlined it for half an hour and then went crazy. My daughter lost it over browsing Amazon.
We are withholding social networks & scrolling video as long as humanly possible, but it’s difficult when you don’t want them to miss out on anything, and there’s an element of controlled exposure…
Again great story, makes me want to sign up as a coach. Sorry for the tangent!
He would have understood the potential of LLMs straight away. Whether he would have delivered a successful product no one knows, but he 100% would have understood how LLMs could become another interface to computers.
I think his pitch would have been like GarageBand but for making apps: “now anyone can make an app” feels very Jobsian.
I suspect Jobs would have held off for a few years just to see where the road is going before making a move. In that sense, Apple’s current lack of a real, dedicated move into the LLM space has become a happy accident. They have avoided the hype cycle and the potential blowback once the more exuberant part of the AI craze fades away.
Why are you sure it was an accident? Couldn’t the people who rose up the ranks around his orbit have learned the right lessons, and this have been very intentional?
I don’t think so, I think he would have thought Apple could deliver a better user experience (LLM Siri) and gone into it wholeheartedly.
My perspective is that Apple doesn’t hold off to see what the market is doing. They hold off until technology can create a viable, usable product (iPhone was ahead of anything, Vision Pro less impactful but similarly advanced, iPod was smaller than any HDD-based MP3 player - just some examples where they pushed the boundaries).
They 100% missed the transformer being a technology that could create a viable product, and I don’t think Steve Jobs would have missed that.
He for sure would loathe LLMs’ current state though. They are a perfectionist’s nightmare and detrimental to many brands. He wouldn’t even allow an (Apple)Red logo for his friend’s charity - there’s no way he would let AI slop come from Apple products.
Do you think that Anthropic don’t include things like this in their harness / system prompts? I feel like this kind of prompt is unnecessary from Opus 4.5 onwards, obviously based on my own experience (I used to do this; on switching to Opus I stopped and have tackled more complex problems, more successfully).
I am having the most success describing what I want as humanly as possible, describing outcomes clearly, making sure the plan is good and clearing context before implementing.
I think that Claude can figure that out though - through the conversation Claude should work out whether this is an enterprise app, a hobby app or whatever, as it decides how to code the thing.
They never parsed your prompt. The magic word reduces the probability that the token corresponding to the end of chain-of-thought will be emitted, which increases test-time compute.
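A toy sketch of that mechanism (my own illustration, not how any real model or harness implements it): if each sampling step emits a hypothetical end-of-chain-of-thought token with some probability, anything that lowers that token’s effective logit lowers that probability and lengthens the expected chain, i.e. more test-time compute.

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def expected_cot_length(end_logit, n_other=9, other_logit=0.0):
    """Expected number of steps before the end-of-thought token is sampled,
    assuming (unrealistically) the same logits at every step."""
    p_end = softmax([end_logit] + [other_logit] * n_other)[0]
    # Geometric distribution: mean waiting time is 1 / p_end.
    return 1.0 / p_end

baseline = expected_cot_length(end_logit=0.0)  # end token as likely as any other: ~10 steps
nudged = expected_cot_length(end_logit=-2.0)   # end token suppressed -> longer chains
print(baseline, nudged)
```

Under these assumptions, dropping the end token’s logit by 2 roughly septuples the expected chain length; the point is only the direction of the effect, not the numbers.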
I don’t get this; I’d rather talk about money upfront. I have a number I know I need to be at, and I ask for at least that. Imagine spending the time and energy interviewing etc. and then finding out that the salary is far, far below expectations?
My thought too. HR only cares insofar as they have a range they need to stick to. I don’t like to be too specific, but I usually suggest my current salary, possibly higher, as a minimum.
More straightforwardly, people are generally very forgiving when people make mistakes, and very unforgiving when computers do. Look at how we view a person accidentally killing someone in a traffic accident versus when a robotaxi does it. Having people run it on their own hardware makes them take responsibility for it mentally, which gives a lot of leeway for errors.
I think that’s generally because humans can be held accountable, but automated systems cannot. We hold automated systems to a higher standard because there are no consequences for the system if it fails, beyond being shut off. On the other hand, there’s a genuine multitude of ways that a human can be held accountable, from stern admonishment to capital punishment.
I’m a broken record on this topic but it always comes back to liability.
Traffic accidents are the same symptom of fundamentally different underlying problems in human-driven and algorithmically driven vehicles. Two very similar people differ more than the two most different robotaxis in any given uniform fleet: if one has some sort of bug or design shortcoming that kills people, they almost certainly all will. That’s why product (including automobile) recalls exist, but we don’t take away everyone’s license when one person gets into an accident. People have enough variance that acting on a whole population because of individual errors doesn’t make sense, even for pretty common errors. The cost/benefit is totally different for mass-produced goods.
Also, when individual drivers accidentally kill somebody in a traffic accident, they’re civilly liable under the same system as entities driving many cars through a collection of algorithms. The entities driving many cars can and should have much greater exposure to risk, and be held to incomparably higher standards, because the risk of getting it wrong is much, much greater.