
I have a 20 ton Takeuchi and I don't recall any feedback in the controls at all. The feedback I use comes from the seat and the sounds of the machine - besides the visual, of course.

I cannot imagine this being useful to me unless the virtual operator's cab closely mimicked an actual machine. It would have to have audio from the machine and be on a platform that tilts relative to the real thing. It would also need 270 degrees of monitors with a virtual mirror to see behind. On the front monitor, minimally, I'd need to be able to see vertically up and down too.

I also imagine all of this would be more useful to seasoned operators who can do most things on excavators in their sleep (definitely not me lol)



The way I think about this - we should not have multiple screens. Human central vision covers about 60 degrees, and binocular vision about 120 degrees. The excavator's bucket is far narrower than that, which means the actual task doesn't require wide vision.

So if we can build really good autonomous safety layers to ensure safe movements, and dynamically resize the remote teleop windows, we can make the operator more efficient. We still stream a 360-degree view; we just get creative in how we show it.
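To make "creative" concrete, here is a toy sketch of the kind of logic we mean - weighting how much screen area each feed gets based on the machine's state. Everything here (names, thresholds, weights) is illustrative Python, not our actual code:

    from dataclasses import dataclass

    @dataclass
    class MachineState:
        swing_rate_dps: float   # swing velocity in deg/s (+ = clockwise)
        tracks_moving: bool

    def feed_weights(state: MachineState) -> dict:
        """Relative screen area per camera feed (weights sum to 1)."""
        weights = {"front": 0.6, "left": 0.1, "right": 0.1, "rear": 0.2}
        if state.swing_rate_dps > 5:        # swinging clockwise: grow right view
            weights = {"front": 0.3, "left": 0.05, "right": 0.45, "rear": 0.2}
        elif state.swing_rate_dps < -5:     # counter-clockwise: grow left view
            weights = {"front": 0.3, "left": 0.45, "right": 0.05, "rear": 0.2}
        elif state.tracks_moving:           # tramming: rear blind spot matters
            weights = {"front": 0.4, "left": 0.1, "right": 0.1, "rear": 0.4}
        return weights

    print(feed_weights(MachineState(swing_rate_dps=12.0, tracks_moving=False)))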

That's on the vision side. We also stream engine audio, and do haptic feedback.

Takeuchis are interesting! One of the rare brands to have blades even on the bigger sizes - is that why you got one?


Just a suggestion from someone who's worked on industrial robots and autonomous vehicles, but I think you're underselling a lot of difficulties here.

Skilled humans have a tendency to fully engage all of their senses during a task. For example, human drivers use their entire field of vision at night even though headlights only illuminate tiny portions of their FoV. I've never operated an excavator, but I would be very surprised if skilled operators only used the portion of their vision immediately around the bucket and not the rest of it for situational awareness.

That said, UI design is a tradeoff. There's a paper that has a nice list of teleoperation design principles [0], which does talk about single windows as a positive. On the other hand, a common principle in older aviation HCI literature is the idea that nothing about the system UI should surprise the human. It's hard to maintain a good idea of the system state when you have it resizing windows automatically.

The hardest thing is going to be making really good autonomous safety layers. It's the most difficult part of building a fully autonomous system anyway. The main advantage of teleop is that you can [supposedly] sidestep having one.
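To illustrate why: even the most minimal version of such a layer - say, slowing the swing as it approaches a spotter's keep-out heading - is easy to sketch and hard to make trustworthy. A toy example in Python, with every name and number invented for illustration:

    def gate_swing_command(cmd_dps: float, swing_deg: float, keepout_deg: float,
                           margin_deg: float = 30.0) -> float:
        """Scale the commanded swing rate toward zero near a keep-out heading."""
        # Signed shortest angular offset from current heading to the keep-out zone.
        offset = (keepout_deg - swing_deg + 180.0) % 360.0 - 180.0
        moving_toward = (offset > 0) == (cmd_dps > 0)
        gap = abs(offset)
        if not moving_toward or gap > margin_deg:
            return cmd_dps                     # no restriction needed
        return cmd_dps * (gap / margin_deg)    # linear slowdown, zero at the zone

    # Operator commands a full-rate swing 10 degrees away from a spotter's zone:
    print(gate_swing_command(cmd_dps=20.0, swing_deg=85.0, keepout_deg=95.0))
    # -> ~6.67, about a third of the commanded rate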

[0] https://doi.org/10.1109/THMS.2014.2371048


I definitely agree with you - recreating the scene in teleop is challenging. With excavators, however, teleop can actually improve visibility. An excavator has huge blind spots to the right (due to the arm), to the back, and sometimes near the bucket. Hence the workers who are hired to stand around (banksmen, spotters, signalmen) and signal to the operator.

It's like driving a Ford F-150 without a backup camera. You'd add the backup camera and show its view up front on the dash, not at the rear window.

It's definitely challenging and we're far from something that's perfect. We're iterating towards something that's better every day.


Yeah, it sounds like a fun challenge. Hope you have lots of success tackling it.


Well sure, if you are just looking at where the bucket is digging - but there is often a dump truck sitting on either your right or left flank waiting for what's in your bucket (don't forget the beep button lol). Having a monitor on either side duplicates what you see out of your peripheral vision when operating the real thing. That would make transitioning from real to virtual much easier and, imho, safer.

Yes, that is precisely why - it makes for a much more versatile machine. It's a TB180FR - med-small, about 10 tons.


I think swinging (which is about 40% of the dig-and-dump workflow by time spent) should not be manual. It's one of the lowest levels of autonomy - it only requires roughly centering on the pit or truck, which we have already achieved. Hence the operator only has to look in front!
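The centering piece itself is not exotic - at its simplest it's a rate-limited proportional controller on the swing axis. A toy Python sketch (the gains and rates here are illustrative assumptions, not our production values):

    def swing_step(current_deg: float, target_deg: float, kp: float = 1.5,
                   max_rate_dps: float = 25.0, dt: float = 0.05) -> float:
        """One 20 Hz control tick; returns the new swing heading."""
        error = (target_deg - current_deg + 180.0) % 360.0 - 180.0
        rate = max(-max_rate_dps, min(max_rate_dps, kp * error))
        return (current_deg + rate * dt) % 360.0

    heading, target = 0.0, 90.0       # swing from pit (0 deg) to truck (90 deg)
    for _ in range(200):              # ~10 seconds at 20 Hz
        heading = swing_step(heading, target)
    print(round(heading, 1))          # converges on 90.0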

Those workflow numbers come from multiple observations at different sites; one example is this: https://www.youtube.com/watch?v=orEsvu1CS64

I'd love to talk to you, because it's a rare sight for someone to have a Takeuchi - is there a way to connect? My email is contact at useflywheel dot ai


>>I think swinging should not be manual.

I disagree, and here are a few reasons why:

1. What am I going to do with the time between releasing control and regaining it from the autonomous system?

2. My first thought is that this break in workflow will cause a break in my concentration.

3. When I am swinging back from the truck to the trench, the bucket is naturally in my control. In autonomy mode the transition from autonomous to my control seems like it would be very unnatural and choppy. I suppose with time it would be okay, but man, it seems to violate the whole "smooth is fast" concept.

I'll shoot you an email.


> I think swinging (which is about 40% of dig and dump workflow by time spent) should not be manual.

It's been over a decade since I last operated an excavator, so grains of salt as usual - but I'd say it should be manual, or at most semi-automated. You need to take care where you unload the bucket on a truck, to avoid its weight distribution being off-center, or to keep various kinds of soil separated on the bed (e.g. we'd load the front with topsoil and fill the rear with the gravel or whatever else was below).


I agree - the dumping and digging itself (where you move the boom, arm, and bucket much more than the swing/tracks) should be manual. But swinging to the truck and back to the pit (a pure swinging motion to center on these areas of interest) does not have to be manual. I agree with your and other comments that the transition has to be smooth, and that's something we're working on.
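The simplest version of a smooth handover is a short cross-fade from the autonomous command to the operator's stick input rather than an instant switch. A toy Python sketch of that idea (the ramp time is an arbitrary assumption, not our implementation):

    def blended_command(auto_cmd: float, manual_cmd: float,
                        t_since_handover_s: float, ramp_s: float = 0.5) -> float:
        """Linearly cross-fade from the autonomous command to the stick input."""
        alpha = min(1.0, max(0.0, t_since_handover_s / ramp_s))
        return (1.0 - alpha) * auto_cmd + alpha * manual_cmd

    # Midway through a 0.5 s handover: autonomy coasting at 10 deg/s,
    # operator already deflecting the stick for 4 deg/s.
    print(blended_command(auto_cmd=10.0, manual_cmd=4.0, t_since_handover_s=0.25))
    # -> 7.0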



