shadowgovt's comments | Hacker News

Disney's had a notable amount of success with that formula. Turtle Talk with Crush arguably saved The Living Seas pavilion space at EPCOT, and it's executed via a digital puppet operated by a behind-the-scenes cast member doing their best Crush voice.

I sincerely doubt that what makes that experience magical can be replicated with AI in my lifetime. Too much contextual knowledge, too much detail in the nuances of human-human interaction, and too much je ne sais quoi in the timing of getting humor right. I've seen Turtle Talk deal with a particularly excited young person leaning on Crush's "tank" by having Crush look at him and go "Hey little minnow... One of the big humans behind you is gonna come scoop you up. I've seen it happen, lots of times!" You can program that interaction in, but the domain-space of having an interaction for every possible "improv moment" might be outside the bounds of what the next several generations of learning models are capable of.

... or I'm wrong, in which case I look forward to enjoying robo-Seinfeld in the retirement home.


This is one of those situations where that's legitimately difficult. Kevin Perjurer is quite a good documentarian, and there's very little trimmable fat on the four-hour product if you want to keep in all the points he made.

gkoberger's peer comment is a pretty good summary. Another interesting point is that these technologies can benefit the brand's bottom line even when they don't make it into the park, because part of Disney's brand is "tomorrow today." Even when things are one-offs, they become one-offs that people stitch into the legend of the parks (in both the retelling and in their own memories), which gives them a larger-than-life feel; your visit might not include one of the "living characters," and statistically it probably won't.

... but it might. And if it does, you'll never forget it.

Personal anecdote / example: I stopped in at the "droid factory" in the Star Wars: Galaxy's Edge area of Disney World a few years back. They had several bits of merch for sale including one life-size R2-D2, inert. I took a close look at the R2 because it was an impressive bit of work. Turned around to look at a rack of t-shirts. And was, therefore, startled as hell to hear a bwoop behind me, turn around, and see that it had followed me out of its charging receptacle and was staring at me. It was not at all inert; it was a very impressive operational remote-control replica.

The cast member behind the counter was doing his best to hold down his grin and not give me a "GOTCHA" look. He had to, because you never know what kids might be watching, and he didn't want to break the magic. And... yeah, he got me good. "That time I was at Disney World and R2-D2 followed me around the t-shirt shop" is gonna stick with me.


I saw a video of someone who bought one of these (IIRC from a limited Home Depot sale)... and it definitely looks impressive, though it has a few minor flaws. I've seen a handful of R2-D2s at conventions over the years, and they're always pretty cool... and while a BB-8 might be technically more impressive, I just don't care for the character nearly as much.

Everything about this chassis strongly suggests no guest touching will be allowed.

In addition to the points you've highlighted, the examples in the video and the images of the character strongly suggest it'll have a soft outer shell. I'd be more worried about a kid shoving it and getting caught on an internal pinch point than about damage to the robot.


This is the standard I use as well. In general, my rule of thumb is that if something is logging at error level, it would have been perfectly reasonable for the program to respond by crashing, and the only reason it didn't is that it's executing in some kind of larger context that wants to stay up when an individual component fails (like a multithreaded web server where one handler hangs on a query and has to be terminated by its monitor). In contrast, something like an ill-formed web query from an untrusted source isn't even an error, because you can't force untrusted sources to send you correctly formed input.

Warning, in contrast, is what I use for a condition the developer predicted and handled but that probably indicates the larger context is bad: "this query arrived from a trusted source but had a configuration so invalid we had to drop it on the floor," or "we assumed a default that let us resolve the query, but that was a big assumption and you really should change the source data to be explicit." Warning is also where I put things like a trusted source calling a deprecated API after the deprecation notice has been up long enough that they really should know better by now.

Where all of this matters is process. Errors trigger pages. Warnings get bundled up into a daily report that on-call is responsible for following up on, sometimes by filing tickets to correct trusted sources and sometimes by reaching out to owners of trusted sources and saying "Hey, let's synchronize on your team's plan to stop using that API we declared is going away 9 months ago."
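
To make the distinction concrete, here's a minimal sketch in Python. Every name in it (handle_query, TRUSTED_SOURCES, the stub helpers) is hypothetical, not from any real codebase:

    import logging

    log = logging.getLogger("query_service")

    TRUSTED_SOURCES = {"billing", "inventory"}  # hypothetical allowlist

    def is_well_formed(query: dict) -> bool:
        return "sql" in query  # stand-in for real validation

    def run(query: dict) -> str:
        return "ok"  # stand-in for actual execution

    def handle_query(source: str, query: dict):
        if not is_well_formed(query):
            if source not in TRUSTED_SOURCES:
                # Untrusted garbage isn't even an error: you can't force
                # the outside world to send well-formed input. Reject it
                # and move on; nothing to log above debug.
                return 400, "malformed query"
            # Predicted and handled, but the larger context is probably
            # bad: that's a warning.
            log.warning("dropping invalid query from trusted source %s", source)
            return 400, "malformed query"

        if query.get("api_version") == "v1":  # long-deprecated
            log.warning("trusted source %s still on deprecated v1 API", source)

        try:
            return 200, run(query)
        except Exception:
            # Crashing would have been a reasonable response here; we only
            # stay up because we're one handler among many. That's an
            # error, and errors page someone.
            log.exception("handler failed on query from %s", source)
            raise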


It seems that the easier rule of thumb, then, is that "application logic should never log an error on its own behalf unless it terminates immediately after", and that error-level log entries should only ever be generated from a higher-level context by something else that's monitoring for problems that the application code itself didn't anticipate.
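
A minimal sketch of that shape (names hypothetical again): the application code raises instead of logging, and only the supervising loop, which is the thing that knows the process is supposed to outlive any one failure, emits error-level entries.

    import logging

    log = logging.getLogger("supervisor")

    def application_logic(request):
        # Never logs an error on its own behalf. If this unit of work is
        # unsalvageable, raise and let the caller decide what that means.
        if request is None:
            raise ValueError("nothing to serve")
        return "ok"

    def supervise(requests):
        for request in requests:
            try:
                application_logic(request)
            except Exception:
                # The monitoring context is what knows the process should
                # stay up, so it's what gets to call this an error.
                log.exception("worker failed on %r; continuing", request)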

Right. If staging or the canary is logging errors, you block/abort the deploy. If it’s logging warnings, that’s normal.

Unless it's logging more warnings because your new code is failing somehow: maybe it stopped correctly parsing the reply from an "is this request rate limited?" service, so now it returns 429 to every caller and never accepts work.
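
That failure mode is easy to sketch (hypothetical names): a fail-closed rate-limit check where a parsing regression quietly becomes a flood of warnings and a wall of 429s.

    import json
    import logging

    log = logging.getLogger("gateway")

    def is_rate_limited(raw_reply: bytes) -> bool:
        # Fail closed: if the limiter's reply is unreadable, assume the
        # caller is limited. Sensible per-request; in aggregate, a parsing
        # regression means every single request gets refused.
        try:
            return json.loads(raw_reply)["limited"]
        except (ValueError, KeyError):
            log.warning("unparseable reply from rate limiter; failing closed")
            return True

    def handle(request, raw_reply: bytes):
        if is_rate_limited(raw_reply):
            return 429, "rate limited"
        return 200, "accepted"

Which is why the canary has to watch warning volume, not just error volume.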

At some point, "novel and innovative" becomes Rube Goldberg, not I.M. Pei.

I think that for software engineering, the far more common issue is that there's already a best practice the individual engineer just hasn't chanced to hear about yet, rather than that the problem on the desk needs a brand-new mousetrap.


My personal goal has been to dig that grave ever since I could hold a shovel.

We've always been in the business of replacing humans in the 3-D's space (dirty, dangerous, dull... and, to be clear, data manipulation for its own sake is dull). If we make AI that replaces 90% of what I do at my desk every day... we did it. We realized the dream from the old Tom Swift novels, where he comes up with an idea for an invention and hands it off to his computer to extrapolate, or of the ship's computer in Star Trek acting as a perfect engineering and analytical assistant that takes fuzzy asks from humans and turns them into useful output.


The problem is that this time, we're creating a competing intelligence that in theory could replace all work, AND, that competing intelligence is ultimately owned/controlled by a few dozen very rich guys.

They aren't going to willingly spread the wealth.


This is a key insight.

In my day job, I use best practices. If I'm changing a SQL database, I write database migrations.

In my hobby coding? I will never write a database migration. You couldn't force me to at gunpoint. I just hate them, aesthetically. I will come up with the most elaborate and fragile solutions to avoid writing them. It's part of the fun.


I have definitely had Claude make recommendations that gave me structural insight into the code that I didn't have on my own, and I integrated that insight.

People who claim "It's not synthesized, it's just other people's work run through a woodchipper" aren't precisely right, but they also aren't precisely wrong... And in this space, having the whole ecosystem of programmers who published code looking over my shoulder as I try to solve problems is a huge boon.


Herding goats doesn't solve the interesting technical problem I'm trying to solve.

Point is: if that problem is solvable without me, that's the win condition for everyone. Then I go herd goats (and have this nifty tool that helps me spec out an optimal goat fence while I'm at it).


> Point is: if that problem is solvable without me, that's the win condition for everyone.

The problem is solvable without you. I don't even need to know what the problem actually is, because the odds of you being one of the handful of people in the world so critical that the world notices their passing are so low that I have a better chance of winning a lottery jackpot than of you being some critical piece of some solution.


I completely disagree - I think it’s the other way around.

Solving the problem - no matter what problem it is - is extremely dependent on you, and every single human being (or animal, for that matter) is a critical piece of their environment and circumstances.


I've also heard similar arguments about "Using stackoverflow instead of RTFM makes you a bad programmer."

These things are all tradeoffs. A junior engineer who goes to the manual every time is something I encourage, but if they go exclusively to the manual every time, they're going to be slower and produce more disjointed, harder-to-maintain code than their peers who have taken advantage of other people's insights into the things the manuals don't say.

