Very cool. It's interesting that they are planning to start with Vulkan support, followed by OpenGL/DX/etc.
I guess it makes sense; the RISC-V crowd might skew toward early adoption over backwards compatibility.
It would be really cool to see some implementations from places like SiFive/GigaDevice/etc. Imagine how easy driver support could be if everyone used and contributed to the same open IPs...
They can also take advantage of Zink to implement OpenGL, at least on platforms that support Mesa, and use Wine's implementations of Direct3D.
I expect that, in the future, GL and D3D will be implemented entirely on top of Vulkan, and thus be more portable and make it easier to build GPUs and graphics drivers.
> I expect that, in the future, gl and d3d will be implemented entirely on top of vulkan
I'm not sure whether you're talking about the "official" Microsoft D3D libs here, but I very much doubt that'll ever happen. They don't like dependencies they can't control. The Excel team used to maintain their own compiler because they didn't want to be dependent on the Visual C++ team who worked in the same building.
Vulkan is not even supported in UWP or Win32 sandboxes; the ICD mechanism its drivers use, inherited from the OpenGL days, is only allowed in classical Win32 mode.
OpenGL ES 3.0 is the latest version on iDevices, and while Android can do up to ES 3.2, it is an optional API.
Likewise, Metal is the name of the game on iOS nowadays, and Vulkan was introduced in Android 7 as optional API, and only became mandatory in Android 10.
Vulkan also carries on the Khronos tradition of extension spaghetti, so while a device might support Vulkan, that doesn't mean it supports the Vulkan the application actually needs.
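To make that concrete, here is a minimal sketch of the check every Vulkan app effectively has to perform: intersecting the extensions it needs with what the device reports. The extension names are real Vulkan identifiers, but the devices and the checking logic are made up for illustration; a real app would get the reported list from `vkEnumerateDeviceExtensionProperties`.

```python
# Illustrative only: "supports Vulkan" is not enough. An app must verify
# that every extension it depends on is actually advertised by the device.
REQUIRED = {
    "VK_KHR_swapchain",            # needed to present to a window surface
    "VK_EXT_descriptor_indexing",  # bindless-style resource access
}

def missing_extensions(device_reported, required=REQUIRED):
    """Return the required extensions the device does not advertise."""
    return sorted(required - set(device_reported))

# Both hypothetical devices "support Vulkan", but only one supports
# the Vulkan this application actually needs:
device_a = ["VK_KHR_swapchain", "VK_EXT_descriptor_indexing", "VK_KHR_maintenance1"]
device_b = ["VK_KHR_swapchain"]

print(missing_extensions(device_a))  # []
print(missing_extensions(device_b))  # ['VK_EXT_descriptor_indexing']
```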
You might be misunderstanding the US Capitol outrage. It wasn't that people brandished weapons or killed each other - that's bad, and everyone wants it to stop, but it does sometimes happen.
The problem was that the Capitol was invaded while the Senate was in session, which made a bunch of lawmakers feel personally threatened. And when rich politically-connected people fear for their own safety, you'd better watch out.
Anyways, protesters attack cops and cops attack protesters in the US...oh, every few months? Such is life.
The revolution always devours its children. One of my favorite aphorisms is:
>Don't put your faith in revolutions. They always come around again. That's why they're called revolutions.
What happened to Georges Danton, the thunderous voice of the early French Revolution? How about Toussaint L'Ouverture, avenger of the New World? Even Simón Bolívar and Manuela Sáenz left behind a fractious group of nation-states.
The US is kind of an aberration in that regard, but even Thomas Jefferson cautioned that "the tree of liberty must be refreshed from time to time with the blood of patriots and tyrants."
The funny thing is, if you extend that metaphor to big tech companies like FAANGs, then they would be the early revolutionaries who are about to get devoured. And that metaphor would fall apart quickly, because those companies have never professed to be revolutionary harbingers of a new and improved world.
I have one question for Tesla customers who trust the company to deliver full FSD.
How do you reconcile that belief with the fact that Tesla's embedded developers did not understand the extremely simple concept of write endurance?
The NHTSA opened an investigation into premature HUD failures because they prevented the backup cameras from working. But the fact of the matter is, the company used a small partition of the Tegra board's eMMC flash to store rapidly refreshing log data. And you are trusting these devs with your life when you enable Autopilot.
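The failure mode is plain arithmetic. Here is a back-of-envelope wear estimate; every number below is an assumption chosen for illustration, not Tesla's actual partition size, endurance rating, or logging rate:

```python
# Back-of-envelope flash wear math. All figures are illustrative assumptions.
partition_gb = 1.0              # size of the log partition (assumed)
pe_cycles = 3000                # MLC eMMC program/erase endurance (typical ballpark)
write_rate_mb_per_hour = 50.0   # sustained logging rate (assumed)

# With ideal wear leveling inside the partition, you can write its full
# capacity pe_cycles times before the cells wear out.
total_writable_mb = partition_gb * 1024 * pe_cycles
hours_to_wearout = total_writable_mb / write_rate_mb_per_hour
years_to_wearout = hours_to_wearout / (24 * 365)
print(f"~{years_to_wearout:.1f} years")  # ~7.0 years under these assumptions
```

The point is that a small, constantly rewritten partition wears out within the service life of a car, and the estimate gets worse if wear leveling is confined to the partition or write amplification is high.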
You're also entrusting my life, and those of my family, to them. But we'll gloss over that, because it's expensive not to.
Tesla is organized functionally. The Infotainment Group did the console electronics, and the SW people there did GUIs and such. So yes, between the electronics folks and the app folks, 'somebody' didn't consider write cycles. In other Tesla groups, such as Body Controls and Propulsion, I can assure you those geeks know such things and plan to deal with funky hardware. The Autopilot group is again separate. There really isn't much crossover. "Systems" is unfortunately an unknown word at Tesla. You know, parts is parts.
This is interesting to know, and your comment flipped a switch in my head - I'd like to know the organizational structure of a lot of companies out there. Is this information you acquired personally? Or is there a resource out there where you can refer to the structure of different companies?
Typically the annual report will give you an org chart with the division heads for public companies. If it isn't there it will be on the website or some other publication and if you can't find it and are an investor you can always simply ask.
From there on down it takes a bit of work to get more detail; we typically spend a day on this during the run-up to a DD (due diligence) to verify what we receive, using a lot of googling, LinkedIn, and other sources to figure out who works at the company and in what role.
The GDPR has made this a bit harder. Team pages are a good source of info for lots of companies in the 10-100 people range; they sometimes list all of their employee names and titles.
I'm not aware of a single source of truth for detailed org charts, if it exists we'd be happy to buy it, it would save us a lot of time and effort.
> How do you reconcile that belief with the fact that Tesla's embedded developers did not understand the extremely simple concept of write endurance?
That sounds more like the kind of situation where the software department said "we need to have a system that has X amount of storage" and the hardware department made the hardware for it, but there was some missing communication about endurance. It's likely not the same people writing the autopilot software.
That being said, I'm not a Tesla customer, and the way autopilot is deployed and marketed makes me very uneasy.
Well, it still speaks volumes about internal culture. Everyone on the team should know they are developing a safety-critical system or component. Yet Bob from software can write a sloppy spec, and Alice from hardware can shrug at the spec being sloppy. It is entirely baffling.
>Yet Bob from software can write a sloppy spec, and Alice from hardware can shrug at the spec being sloppy.
Ehhh, you have not been in industry long, have you? :)
And that's why you get everything in writing, signed off by all parties involved. Even telling people directly, to their face, with witnesses, does not work. Checklists for the departments do, however.
Hmm I don't want to defend Tesla, but I do want to push back on this a bit!
Facebook made "move fast and break things" famous, which was Zuckerberg's way of presenting a tradeoff. Every company says it wants to move fast, but Zuckerberg made it clear that the company should care more about velocity than stability.
I don't believe that's Tesla's attitude. Rather, I think their attitude is more "move fast and ignore regulations". It's not that things won't break, but rather that the tradeoff Tesla is making is around regulation rather than things breaking.
The other side of the coin is that they hobble along 5-20 years behind what technology makes possible. If you want to push boundaries, you sometimes have little choice.
Self-driving cars "in the next 10 years" have been a serious possibility for the last decade or so, yet governments and insurers don't have a policy ready, and won't until a couple of years after the first self-driving cars are available.
Tesla, Google et al. are not little startups but huge corps with plenty of cash and the ear of any politician or CEO. If they can't get a policy enacted, maybe there's a reason for that.
Pushing boundaries is fine when there are no life-threatening implications.
Self driving is all about convenience and costs[1] and as such it's not necessary, nor is it advisable, to inflict the bleeding edge on the general public. Waymo's geofenced approach is less bad than Tesla's, and it's something that regulators can readily work with also.
1. But teh safeties!1! No. Just no. ADASes (advanced driver assistance systems), particularly autonomous emergency braking, remove the safety argument for self driving. With ADASes you have 95% or more of the (asserted) safety of self driving, and ADASes are available today, on a large and increasing range of cars. There are even retrofit kits.
There is nothing like self driving in current regulations, much less 10-15 years ago when people seriously started working on it. So, in your view, even starting to work on this stuff should be a criminal offense?
Also, I think it was partly to cover the backs of FB engineers. Say you implement a new feature and you are afraid you'll get scolded because it broke something. You know you are covered, so you dare to change things. Actually, were there even a handful of cases where things broke? (And I am sure FB will get rid of an engineer who breaks too many things.)
So, the culture was "it's okay to break things as long as you're moving fast". I don't think Tesla would explicitly say "it's okay to break things" to their engineers, but I do think they'd say "it's okay to ignore regulations".
In the end they may produce the same results; however, it's all about what employees know they can safely get away with.
Tesla might actually say: "Everybody, we need to push end-of-quarter sales. You gotta release FSD as it is. App team, you gotta implement some butt-dial purchase button for FSD that has no undo. Thanks."
A lot of this also stems from a culture of quarterly earnings reports and idiotic "fiduciary" duty to some shareholders instead of the primary duty being to customers and humanity.
The incentives are fundamentally defined in the wrong way, and the system has simply optimized itself for those incentives.
There is no challenge reconciling imperfect FSD with high trust of that FSD.
When I decided to play the game of risk minimisation, I sold my car. Minimising risk isn't the most important goal of drivers, almost by definition. Cars are not safe in any objective sense. They are tools of convenience.
A fun hypothetical, you and a good friend get tested for spatial intelligence and it turns out there was a big difference in your favour - how big does the difference need to be before you tell your friend you are no longer comfortable letting them drive when you are in the car?
While spatial awareness is important during driving, I believe being focused on driving is even more so.
When driving the tight streets of old European cities with pedestrians jumping out everywhere, I usually watch for hints, like tall cars parked on the sidewalk that might be hiding pedestrians planning to cross the street, and move my foot from the gas to hovering above the brake pedal. And a million other things like that, mostly by paying close attention to driving.
Sure, I believe my spatial awareness is also great, but that helps me parallel park in fewer back-and-forths, or remember the way to a place I've been to once six months ago through a maze of one-way streets. It does not help me reduce the chances of an impactful collision (sure, I might ding a car in the parking lot, or avoid doing so, because of it, but nobody is going to get hurt either way).
You are right that cars are not safe, but for some part, you've got control of the risk yourself. I also watch for hints a car will swerve in front of me, and I am sure I've helped avoid 100s of traffic accidents by being focused on the whole driving task. And other drivers have helped avoid traffic accidents that I would have caused in probably a dozen cases too. I think I am an above average driver simply because of that ratio.
You run similar risks when you board a public bus without knowing how the driver feels that day, and how focused they generally are.
> You are right that cars are not safe, but for some part, you've got control of the risk yourself.
I don't want to be in control of the risk, I'm a bad driver. Haven't owned a car for some years. Still drive on occasion when I need to with a hire car.
I want a computer that is better at driving than I am to do it. It is easy for me to see why perfect is the enemy of good on this issue.
You don't want to share a road with me when you could share it with a Tesla FSD.
>You don't want to share a road with me when you could share it with a Tesla FSD.
This might be irrational, but I'd rather be killed by a human than killed by a computer made by a company that's run by a gung-ho serial bullshitter. That would somehow suck worse.
Bugs in humans let them do that too: "The US NHTSA estimates 16,000 accidents per year in USA, when drivers intend to apply the brake but mistakenly apply the accelerator."
Or: look-but-failed-to-see errors, which are an "interesting" cause of accidents. When I took my motorcycle driver's test, my driving instructor sometimes warned me that I needed to make movements in a particular way. He claimed that even though I would make eye contact with a car driver, they may look-but-not-see-me. His reasoning was that, as a motorcycle rider, I'm vertical/upright when a car driver may be looking for something horizontal (another car).
Riding a motorcycle is a tough one for car drivers, and not just because of the issue you mention: bikes can accelerate and brake much more rapidly due to their lower mass, and inattentive drivers can easily be caught out by that. Bikes appearing where it shouldn't be possible for a car to show up also amplifies the issue (you don't need to look over your shoulder in a single-lane street, but bikes easily show up there).
To be honest, I'd trust software even less if I were a bike rider riding in a European (or Chinese, Philippine...) city, but that's just me :)
You mention focused driving, but here's a cool idea. Your subconscious, which actually handles most of your behavior, decision making, and nuanced calculations, gradually learns from your conscious mind. When you focus on things, you gradually train your subconscious to mirror that behavior and do it autonomously.
This is demonstrable by reflecting on new things you learn versus old things. Old things like walking barely take any conscious effort; once you reach a certain age, the daily obstacle course that is life, full of tripping hazards, becomes effortless to navigate without ensnaring your foot and succumbing to a sudden tumble. But if you were to try rollerblading for the first time, suddenly you have to put massive conscious strain and focus into every movement just to avoid falling over something as simple as a slight texture change on a surface.
Also, an interesting thought on (conscious) spatial awareness. Here's a question: is your conscious mind aware of things first, or is your subconscious aware first? When your conscious mind becomes aware, how sure are you that it wasn't your subconscious alerting it beforehand? These are rhetorical questions which psychologists and neuroscientists already have insights about :).
Life is dangerous, but many of the dangers are predictable, and the brain is adept at adjusting to that predictability AND at learning to recognize indicators of unpredictable dangers (humans feel anxiety in those moments). In the latter situations, intelligence and consciousness are needed. Dangers that are predictable can be handled subconsciously without much worry, given enough practice and experience.
Tesla Autopilot is a computerized subconscious that's consciously trained by all the Tesla drivers.
I strongly suspect that we'll never have Level 5 autopilot, with or without lidar sensors, unless the computers get a human-adaptable intelligence module OR some convention simplifies the environment such that new unpredictable dangers can be minimized to a minuscule and acceptable failure rate.
I think people in this debate are focusing on the wrong issues.
You say we subconsciously handle things like obstacles during walking, but here I am at 38 years of age, tripping on an uneven sidewalk where there's a sudden unnoticeable drop of a couple of centimetres (about an inch): the same feeling as when you go down the stairs in the dark and forget there is one extra step.
I agree we get subconsciously trained (here, my brain is expecting a perfectly flat sidewalk), but when I say focused driving, I am mostly thinking of *not doing anything else*: to the extent that I also keep my phone calls short (or reject them), even with the Bluetooth handsfree system built into my car with steering-wheel controls.
The thing is that a truck's trunk opening in front of you and things starting to fall out on a highway at 130 km/h (~80 mph) is very hard to train for, but all four of us car drivers who were right behind when it happened managed to avoid it without much drama or risk to ourselves or each other. What self-driving tech today would you trust to achieve the same? Sometimes you don't care about averages, because they are skewed by drunks or stupid people showing off on public roads.
And stats by miles covered are generally useless: if it were accidents per number of performed manoeuvres, it'd be useful. Getting on an empty highway and doing 100 miles is pretty simple compared to doing 2 miles in a congested city centre.
The HUD, as you call it, is not a safety-critical part of the car in Teslas. You can reboot it while driving without affecting the car. The self-driving computer is separate and has full redundancy, to the point of having two processors running redundant code. There is a reason Teslas are consistently rated as the safest cars on the road, with a low probability of being involved in an accident and the lowest probability of injury when an accident does happen.
Two processors? With only two, there's no majority vote, so what happens if they disagree? (Or maybe they wanted to avoid a 'minority report' situation :-)). But honestly, do you know what they do? Since it is not flying, probably some red indicator lights up, and maybe a stopping maneuver.
"Each chip makes its own assessment of what the car should do next. The computer compares the two assessments, and if the chips agree, the car takes the action. If the chips disagree, the car just throws away that frame of video data and tries again, Venkataramanan said."
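The scheme in that quote can be sketched in a few lines. This is purely illustrative pseudologic based on the quoted description, not Tesla's actual code; the "chips" and frames are made up:

```python
def lockstep_decide(frame, chip_a, chip_b):
    """Dual-redundancy as described in the quote: each chip independently
    assesses the frame; the car acts only if both agree, otherwise the
    frame is discarded and the next one is tried. Illustrative sketch."""
    a, b = chip_a(frame), chip_b(frame)
    return a if a == b else None  # None = drop this frame, try again

# Toy "chips": chip_b has a fault injected via the 'glitch' flag.
chip_a = lambda f: "brake" if f["obstacle"] else "cruise"
chip_b = lambda f: "brake" if f["obstacle"] or f.get("glitch") else "cruise"

print(lockstep_decide({"obstacle": True}, chip_a, chip_b))                   # brake
print(lockstep_decide({"obstacle": False, "glitch": True}, chip_a, chip_b))  # None
```

Note the design choice this implies: two-way comparison can only detect a disagreement, not identify which chip is wrong, which is why the described response is "discard and retry" rather than a majority vote (that would need three units).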
Fail-safety doesn't mean anything if the decisions it makes are bad, like thinking a plastic bag is a solid object on the road, or simply forgetting where the lane is and swerving into oncoming traffic.
I worry about them, but not more than people abusing AP, they're all in the same boat.
The people texting and driving are idiots, distracted idiots, but they have no misconceptions about whether their car will save them if they take a nap.
Elon has made comments like "your hands are just there for regulatory reasons" and has overpromised for years, so now people abuse it until it's just as dangerous, if not more dangerous, than distracted driving (stuff like intentionally sleeping or using a laptop full-time).
Other manufacturers are coming out with features that protect me from texting drivers without generating a new breed of ultra-distracted drivers like those who are falling for Elon's act.
Now a base model Corolla, pretty much the automotive equivalent of a yardstick, will steer people back into their lane and warn drowsy drivers that the car is intervening too much.
A Tesla can't even do the latter.
-
One day we're going to look back and wonder why we allowed things like automatic steering without fully fledged FSD.
I mean, the driver is actually safer if you only intervene when something goes wrong. They're forced to be attentive, yet in every situation where they fail to be attentive and AP would have saved them, it does save them. And tells them to get their head out of their backside.
If AP did that, every person it has saved would still have been saved, but the few people it got killed would still be here today.
Same here. However, I am assuming you have decent sight, so you can at least protect yourself in certain situations. I am blind, and I am getting increasingly wary about the future as a pedestrian.
As a kid, one of my biggest fears was automatic doors. I sort of imagined they would close on me and try to kill me. I am afraid this horror is going to come true at some point in my life. Automation is going to kill me one day.
> How do you reconcile that belief with the fact that Tesla's embedded developers did not understand the extremely simple concept of write endurance?
Easy: compartmentalization of knowledge. Most software developers I have met have no idea about the storage stack under their application; they trust the OS and the hardware people to deal with it. And who can blame them, in the age of AWS, or of actual servers where companies simply throw money at any problem and hardware is rotated out for something new before write endurance ever becomes an issue? And the hardware people probably knew the OS people would run Linux, but didn't expect logfile spam.
Please note that you are asking this question at the tail end of a pandemic in which a significant portion of the country decided it was preferable to "just let some old people die" than to lock down or even wear masks.
Those people will twist themselves into giving you a PC answer, but the truth is they're willing to crack a few eggs in order to get FSD today. They'll tell you no one else is even trying, and that in the long run FSD will save more lives and Musk should be praised for having the gumption to get the ball rolling.
> in the long run FSD will save more lives and Musk should be praised for having the gumption to get the ball rolling.
In the long run, from a historical perspective, this is a very plausible outcome. It's happened before (any major construction project prior to the 1950s, anything involving Thomas Edison, most large damming projects): a historical event is tied to a bunch of dead innocents, but the history books praise the vision and determination of the ones in charge for not giving up just because a few measly blue collars kicked the bucket early.
The difference, in both this and the grandparent post, is choice.
If you fear the pandemic and want to lock yourself up in isolation, we should as much as possible allow that. And if you want to work on very dangerous projects for better rewards, you should be able to.
With autonomous cars, the choice of risk may not be so easy.
Arguing about what real choice you have is overly pedantic; we should rather concentrate on the principles for the right outcome.
Well it's up to the paper, but personally, I support papers re-evaluating their positions on this sort of thing.
You'll still be able to find the old article on archive.org et al., but what a paper publishes today should reflect what it stands for today. If the paper was wrong or unfair, what is wrong with modifying or removing the coverage that is served today? Is that really worse than printing a "correction" paragraph at the end of the original article?
Maybe publishers could implement a sort of "timeline" feature which shows how the organization's understanding of an event changed over time. But today, I can't see anything wrong with a newspaper accepting petitions to modify outdated coverage.
They don't pay them to work 40+ hrs/week, but often they expect them to work 50+ hrs/week. That's been my experience in Sweden, though at least the pay is okay there.
Well in the US, postgrad education is a genuinely interesting proposition.
* Very low pay.
* Good benefits in a nation with poor safety nets.
* Tuition waivers along the lines of $10-100K/year.
* When the Dr. says jump, you ask how high.
If you view education as an investment, it isn't necessarily bad compared to an ordinary job. But it's kind of like a FAANG company; your experience depends on who you report to.
Sure, the low pay is commensurate with...uh...some sort of ephemeral opportunities in the future.
But seriously, you're right. Grad students do grunt work, that's how it goes. And if an academic Python2 library is widely-used, porting it is important grunt work.
Surely, no serious researcher would let an important tool rot, right?
The problem is that because of the way the incentives are set up, it is not important to anyone involved.
What is important to the grad students is to produce research papers and to fulfill their mandatory obligations (teaching, project deliverables). And most grad students, even in CS, are not professional software developers anyway. Good luck convincing capable grad student candidates to join your group to do boring software maintenance for horrible pay and no job security.
What is important to the professors, who decide what the grad students will work on, is again to produce research papers, fulfill their mandatory obligations (teaching, project deliverables) and to continually file for grants. Spending grad student time on porting and maintaining libraries does not help with that. In the worst case your grad student is spending their time maintaining a tool that a competing group's grad students are using to churn out papers, beating you to publications and grants.
What is important for the funding agencies is flashy new research in the current hot topics. I never saw a funding agency that would even consider paying a grad student, let alone a full software engineer salary, to port an academic tool from Python2 to Python3 or do all the other maintenance you need to do on production codebases---nor do most universities even have salary classes and positions for that.
As a result, in the many years I spent in academia, I saw many important research tools rot (both software and large hardware testbeds). The solution is not grad students, but to have fully paid software engineer positions in academia. But realistically that is not going to happen.
I was getting a tour of a lab from a grad student, and I was told a desk was full of 3.5" floppies with data from old research. I said "Wait, what? Old floppies won't retain data indefinitely!" and got shushed -- she didn't want to wind up responsible for trying to do data recovery on hundreds of floppy disks, which would do jack for getting her to her Ph.D.
Rot is a very big part of what happens to a lot of information.
From limited exposure, these codebases are often in truly awful shape. They've endured years or decades of being hacked up just enough to finish someone's thesis or dissertation, with no concern at all for maintenance.
It should be no surprise that such a poorly engineered process produces awful results.
They might just dislike H2O2. But it is useful as a disinfectant, and it can help get rid of earwax if you dilute it. Do we really need to ban the stuff? People's tastes vary, but personally, I'm not a huge fan of earwax.
Per multiple doctors and an audiologist the best way to clean your ears easily is to just use warm water while showering and direct it down your ear canal with moderate water pressure. I used an irrigator for years, just with tap water, but the possibility of too much pressure always worried me. I struggle(d) with ear wax build-up in my teens but have been wax free for 2 decades now, so long as I am diligent about cleaning my ears.
Maybe, but try to get into the mind of someone who installs a minimalist OS. They might think that Linux is too heavy and an RTOS is too constrained, right?
That kind of perspective probably wants to be fairly close to the metal, and virtualization is a sizable abstraction layer.
I assume that "mm" is a typo in the article, but that cost was for a >15-year-old process node. 65nm for $100M might be a stretch, if that is accurate.
That 300mm is a different measurement. It's not a typo; they're referring to the size of the wafers, in this case 300mm in diameter (very close to 12 inches).
Yes. In addition, wafer size is another important metric of fab capabilities aside from process node, since it roughly translates into production rate.
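Rough arithmetic shows why: gross die count scales with wafer area. The die size below is an assumed example, and edge loss and defect yield are ignored, so this is only an order-of-magnitude sketch:

```python
import math

# Why wafer diameter matters: die count scales with wafer area.
# Die size is an assumed example; edge loss and defects are ignored.
DIE_AREA_MM2 = 100.0  # a 10 mm x 10 mm die (assumed)

def gross_dies(wafer_diameter_mm, die_area_mm2=DIE_AREA_MM2):
    """Upper-bound die count: wafer area divided by die area."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

print(gross_dies(200))  # 314 dies on a 200 mm wafer
print(gross_dies(300))  # 706 dies on a 300 mm wafer: ~2.25x per fab pass
```

Since each wafer takes roughly the same number of processing steps regardless of diameter, moving from 200mm to 300mm wafers raises throughput by about (300/200)^2 = 2.25x for similar effort.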
This is coming in late, but the short answer is "I have no idea".
The slightly longer answer is "I don't know. But once you have the space and equipment, and you're sure you can handle everything safely, you still need to staff it and build up some institutional knowledge." If you're thinking this far, you should ask: why 65nm? Because it's a process node you know a certain microprocessor was built on? Is it just a benchmark that's useful for calibrating your sense of what it costs to do this from scratch?
The cheapest way to get the equipment would be buying out a fab that's shutting down. When I was in undergrad, I got to play in a 5 micron (5000nm) fab on campus because the previous owner of all the equipment gifted it to the university instead of scrapping it. My gut sense is that there's nothing at 65nm that's unprofitable yet, so you're not getting cheap stuff.
(A 6 micron node was commercialized in 1974; this university lab opened in 1993. 65nm was commercialized in 2005. While Moore's Law held, more or less, through much of 1974-2005, I think we can say that the difficulties and necessary capital investments increased super-linearly over that time. And 20-year-old manufacturing tech seems much more useful now than it did in '93. The STM32 family of microcontrollers, a very strong line, is spread over the 130-40nm range of process nodes.)