The names of some birds still reflect this. For example, the ring-necked duck has a ring around its neck that’s almost impossible to see on a live bird.
We’re the team that designs and develops the operating system for the Secure Enclave used in iOS, tvOS, watchOS, and macOS devices. We develop the full software stack, including the L4 microkernel, runtime libraries, hardware drivers, and more. We work very closely with Apple’s Silicon Engineering Group to help design the Secure Enclave hardware.
This is a great place to work if you’re into some combination of embedded, operating systems, and security.
Well, the position advertised is embedded programming in C/C++. Very much hardware-oriented. It's an area I'd love to get into but my career has evolved to be mostly automation-related Python coding. I don't routinely write enough C to claim expertise any more. Positions like this (especially at big well-known companies like Apple) also require a lot of domain experience, which can be difficult to get without pushing reset on your career and starting at the bottom again.
Part of the issue may have been that the plane had slowed down so much that the stall warning stopped (it disengages below a certain airspeed apparently). When he stopped pulling up, the plane sped up and the stall warning started again. Pull up again, plane slows down, stall warning stops.
I wonder if something about this system was changed after that incident - why not keep sounding the stall alarm if the plane ends up outside the flight/sensor envelope? Can’t you assume that it didn’t magically cross the stall zone back into normal flight?
AF 447 wasn’t all that different from this situation. One of the co-pilots was trying to pitch the nose down to recover from the stall. The other was panicking and trying to pitch up. The plane averaged their inputs, without giving feedback via the stick that this was happening. It wasn’t until very late in the flight that they figured out what was happening, and then it was too late to recover.
Obviously there was some significant pilot error in this case, but a big contributor may have been that the pilot who was trying to correct the stall didn’t understand that the plane was ignoring his input because of the averaging.
In April 2012 in The Daily Telegraph, British journalist Nick Ross published a comparison of Airbus and Boeing flight controls; unlike the control yoke used on Boeing flight decks, the Airbus side stick controls give little visual feedback and no sensory or tactile feedback to the second pilot.
Ross reasoned that this might in part explain why the pilot flying's fatal nose-up inputs were not countermanded by his two colleagues.
In a July 2012 CBS report, Sullenberger suggested the design of the Airbus cockpit might have been a factor in the accident. The flight controls are not mechanically linked between the two pilot seats, and Robert, the left-seat pilot who believed he had taken over control of the aircraft, was not aware that Bonin continued to hold the stick back, which overrode Robert's own control.
That suggests there was only ever one pilot flying, and the way that pilot reacted to the situation had a big part to play in the final crash.
> That suggests there was only ever one pilot flying
"Pilot flying" is a human-factors title, not a software function-lock. It just indicates who has control responsibility at that moment but it is not enforced by technical means.
It is intended to eliminate ambiguity in crew functions; the PF can be a newbie copilot even if the commander of the aircraft is a 30-year-service Captain, who would become the PNF at that point. It's all part of Crew Resource Management theory.
There should only be one PF in a cockpit at any one time, precisely to avoid the situation that arose with the Air France flight where the computer was receiving inputs from two pilots.
I was responding to the claim the flight control was averaging the two pilot inputs, because if that was the case then two pilots would have been flying the plane.
My point was that I doubt this was in fact happening, and there was only ever one pilot in charge.
> the Air France flight where the computer was receiving inputs from two pilots.
The link and quotes I posted suggest that was not happening.
The system was just ignoring the other pilot (and that was the designed fault) because it also failed to tell that other pilot he was being ignored.
Thanks for the link. It is a very interesting read.
In particular it also says this:
To avoid both signals being added by the system, a priority P/B is provided on each stick. By pressing this button, a pilot may cancel the inputs of the other pilot.
Yes, indeed, I have not found any reliable source for the claim that both pilots were making significant stick inputs simultaneously for any extended period of time.
You may be right about the averaging. From rereading the accident report, the Pilot Flying took back control of the plane after the Pilot Not Flying engaged his controls and tried to pitch down.
But, it’s the same basic idea. The PNF thought he’d gotten control of the plane, and didn’t understand why his input wasn’t having an effect. He didn’t get feedback from the stick telling him a different input was being honored. And neither pilot appears to have been fully aware that they were in a flight control mode where there was a risk of stalling. The PF especially never seemed to have made that connection, and the PNF took a fairly long time to call it out. As a result, the PF may not have been aware that he needed to actively keep the angle of attack inside the flight envelope.
So, PNF tries to pitch down, but isn’t aware the plane got put back into a mode where he isn’t in control. PF is pitching up, but isn’t aware the plane switched to a mode where this could lead to a stall. That’s the similarity I was getting at.
From the reported control traces, there was no prolonged period of dual input. There were 3 or so brief moments of dual control input (1 - 2 seconds), during which a warning was sounded. The pilots never spoke out loud about it, but we can infer that they heard the dual input warning and were aware when it happened because the sequence of events was the same each time; inputs from both joysticks received -> aural dual input warning -> input from one joystick stops.
Something about the idea of two pilots inadvertently fighting each other for control of the aircraft has definitely caught people’s imagination. But it didn’t happen.
I used to work in the industrial controls industry. The systems are often designed by application engineers working for the industrial control equipment’s manufacturer or distributor. In the case of distributors, the engineering work is often provided for “free” and paid for with the markup the distributor applies over their cost to purchase the components direct from the manufacturer. Those same application engineers will be involved with helping to make the sale. If a customer asks “can I connect this to the Internet?”, any response other than “of course!” is liable to result in a talking to from the sales manager for that account.
I feel like I’m missing something here. An infoleak is required to successfully ROP against ASLR (otherwise the attacker doesn’t know what to overwrite the return address with). Once an infoleak is available, the address of the stack can be leaked. I’m not really sure this does much beyond requiring attackers to modify their existing exploits.
It increases the complexity of the attack. Stack cookies already make ROP harder these days, but guessing the cookie only has a complexity of 8*256 (on OpenBSD), whereas xor'ing the return address with another value increases the complexity further. That might be good news for programs that fork a lot (like nginx) and hence don't get a fresh ASLR layout or stack cookie for every request (unlike e.g. sshd on OpenBSD, which does fork/exec to ensure ASLR and cookies are refreshed).
OpenBSD has been expanding the fork+exec model throughout its source tree; since the OpenSSH preauth work done by Damien Miller, many more daemons have followed. The list includes bgpd/ldpd/eigrpd/smtpd/relayd/ntpd/httpd/snmpd/ldapd and most recently slaacd & vmd.
A few remain but are being converted as they are discovered.
How does that work? Should the kernel walk the stack to change all the saved cookie values of the forked copy? I doubt the kernel even knows where the saved cookie values are stored on the stack. Also, that would make fork quite slow, depending on how deep the stack was when the fork happened.
The post-fork canary value could be paired with the stack pointer at which it became valid. If not valid, the process could walk a linked list of pre-fork canary and stack pointer pairs, to find the correct value to use. Would be interesting to see the performance hit on such an approach.
"ROP attack methods are impacted because existing gadgets are transformed to consist of '<gadget artifacts> <mangle ret address> RET'. That pivots the return sequence off the ROP chain in a highly unpredictable and inconvenient fashion."
I'm not seeing how it's unpredictable and inconvenient. It's predictable if the stack address can be leaked (via a frame pointer leak, for example). It doesn't seem that inconvenient. Instead of including the address of a gadget in the chain, include the gadget xor the leaked stack address. What's the unpredictable and inconvenient part that I'm not seeing?
You have it - if a stack address can be leaked, and you can follow the control flow to figure out the difference between the address you leaked and the address you're going to be dumping your ROP chain into, then you can just xor the gadget address with the stack address, and then do the math to xor any down-chain gadgets with the calculated stack address if the gadgets you want to use happen to have this xor instruction injected into them.
But you don't always have stack address leaks. Presently, in order to ROP you need (a) a leaked address to the space where your gadgets live and (b) the ability to write your ROP chain somewhere where the program will return into it. With this scheme, you now also need (c) the exact address where you are writing your ROP chain.
Not all info leak vulnerabilities leak arbitrary memory of the attacker's choosing. If they did, stack canaries would be pretty useless. So for those cases where a stack address leak is unavailable, this raises the bar against ROP.
I believe they're Zigbee Home Automation, not ZLL. Either that or they're improperly reporting that they support ZHA instead of ZLL, which causes issues for 3rd party bridges like Philips Hue. https://developers.meethue.com/comment/2686#comment-2686
If you find yourself starting a comment with “not to sound rude”, you might consider whether skipping the comment altogether is the best way to not be rude.