For what it's worth, I am a big fan of ergonomic keyboards, and quite recently a company released my dream keyboard. It certainly won't be the right choice for everyone, but I highly recommend giving it a try.
A bit of a loaded question, but how do you feel about Xbox f-ing up major titles like Halo Infinite and Phil Spencer just saying "yep, we're sorry, that's on me"? Especially when a) there doesn't seem to be any change in quality assurance (see Redfall) and b) the devs still take the hit (see 343).
To me it's incomprehensible how they managed to f-up Halo Infinite, just stand there in silence, and then say "yeah, we're sorry for not having a lot of interesting titles on Xbox, but here's what's coming next year."
If I were a game dev at Microsoft, I'd be furious!
I think telemetry collection is a symptom of deeper organizational issues.
For instance, I've never worked with a competent release manager who said "we need more field telemetry!"
Instead, the good ones invariably want improved data mining of the bugtracker, and want to increase the percentage of regression bugs that are caught in automated testing. They also generally want to increase the percentage of automated test failures that are root-caused.
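The metrics described above can be sketched against a hypothetical bugtracker export. This is purely illustrative: the field names (`kind`, `caught_by`, `root_caused`) and the data are invented, not from any real tracker.

```python
# Sketch: the numbers a release manager might actually want, computed
# from a (hypothetical) bugtracker export. All fields are made up.
bugs = [
    {"id": 1, "kind": "regression",  "caught_by": "automated-test", "root_caused": True},
    {"id": 2, "kind": "regression",  "caught_by": "field-report",   "root_caused": False},
    {"id": 3, "kind": "feature-gap", "caught_by": "field-report",   "root_caused": True},
    {"id": 4, "kind": "regression",  "caught_by": "automated-test", "root_caused": True},
]

# Percentage of regression bugs caught by automated testing.
regressions = [b for b in bugs if b["kind"] == "regression"]
caught = sum(b["caught_by"] == "automated-test" for b in regressions)
print(f"regressions caught by automated tests: {caught}/{len(regressions)}")

# Percentage of those automated catches that were root-caused.
failures_root_caused = sum(
    b["root_caused"] for b in regressions if b["caught_by"] == "automated-test"
)
print(f"automated catches root-caused: {failures_root_caused}/{caught}")
```

The point is that everything here comes from data you already have in-house; none of it requires field telemetry.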
> I can speak as a GNOME developer—though not on behalf of the GNOME project as a community—and say: GNOME has not been “fine” without telemetry. It’s really, really hard to get actionable feedback out of users, especially in the free and open source software community, because the typical feedback is either “don’t change anything ever” or it comes with strings attached. Figuring out how people use the system, and integrate that information in the design, development, and testing loop is extremely hard without metrics of some form. Even understanding whether or not a class of optimisations can be enabled without breaking the machines of a certain amount of users is basically impossible: you can’t do a user survey for that.
The problem being that inevitably the 'improvement' gets measured through the same telemetry figures that are being optimised, so of course it's perceived by developers as helping them improve things.
To add to the point: Alphabet probably has more data than any other company (except FB, I suppose), and they still can't release a good product that people will actually use, no matter how much data they have.
There's this anecdote (which might have been made up entirely to make a point):
Restaurant management wanted to compare different soup offerings by counting orders for each soup to determine which ones were more popular. They selected the two most popular offerings; the rest were scrapped in order to save money on ingredients. Soon after, not only did the order numbers for those two soups drop, the total number of soup orders dropped. How come? Well, maybe nobody had asked the customers whether the offered soup was tasty at all. A quick survey revealed that customers make the popular choice, find out it's crap, and then never order soup again in that restaurant, or in rarer cases give the other one a shot. It turned out the most popular offering was basically cheap crap nobody wanted to eat, and when there's nothing else, they keep ordering the same, or never visit that restaurant again.
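The gap the anecdote points at can be shown with made-up numbers: order counts (what the "telemetry" sees) and satisfaction (what customers actually feel) can rank the offerings in opposite order. All figures below are invented for illustration.

```python
# Invented numbers: soup "A" is ordered most, soup "B" tastes best.
orders = {"A": 800, "B": 200}         # what order-count telemetry measures
satisfaction = {"A": 0.2, "B": 0.9}   # share of customers who'd order again

best_by_orders = max(orders, key=orders.get)
best_by_taste = max(satisfaction, key=satisfaction.get)
print(best_by_orders, best_by_taste)  # the two metrics disagree: A vs B

# Expected repeat customers is what actually drives future orders:
repeat = {s: round(orders[s] * satisfaction[s]) for s in orders}
print(repeat)  # B retains more customers despite far fewer orders
```

Optimising on the order counts alone picks the soup that quietly drives customers away.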
Telemetry does not tell you anything about user preferences. Whoever is selling you that idea doesn't either.
Alphabet markets two Linux-based operating systems for consumer devices that are extremely popular. Telemetry from the field makes Android and ChromeOS better operating systems.
I don't think the fact that Google has two popular OSes with telemetry means that telemetry makes them better OSes. For Android, their only real competitor is iOS. Their big advantage there is cost, and the reason there isn't really another competitor is the difficulty of creating an OS, which requires enormous resources. For ChromeOS the situation is similar with regard to the time it takes to create an OS. I think there their main competitors are small Linux distributions, but, in addition to manpower, Google has the money to procure and sell laptops with their software already on them, at scale. So I'm not convinced telemetry is actually creating a better product; their products just happen to have telemetry.
This is HN, so you are allowed to hold forth out of pure ignorance, if you want to. Android-wide and ChromeOS-wide profiling produces binaries, including the Linux kernel, that are peak-optimized for actual conditions in the field and real use cases. They ship the only profile-optimized Linux kernels you can get from any distribution. It is a demonstrably better product through telemetry.
Got a link to any information on this, especially the collection of the profile information from users' devices? The only reference I can find to PGO on Android is the support for the more traditional flow (create a special instrumented binary, run a 'representative' workload or two, get profile data for PGO). It would be especially interesting if there's any info on how much of a performance improvement this yielded.
For instance, last time I checked them out (in MSVC) it was perfectly fine to expose an un-exported type via an exported function. Apparently this is by design.
The problems that paper points out are certainly real, but:
- They're not specific to modules (the codegen dependency issue is there even right now)
- They're not specific to C++ (Java faces a similar problem)
- They all have solutions: either you have to tell the build system what the dependencies are directly, or your compiler has to be able to build the dependencies too. Bazel and Gradle already handle these, just in a different context than C++20 modules.
So while C++20 modules are harder to use, and they very well might not get adopted, it's not really a design problem, nor an unsolved one - it's just a fundamental impedance mismatch that occurs in any language where the dependencies are specified in the same file that's being compiled.
i.e. it's a problem for the build system to solve, and build systems already exist that have solved other forms of it in the past.
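As a sketch of what that build-system work looks like: with C++20 modules the dependency edges live inside the sources, so the build system has to scan the files before it can schedule compilation. A toy dependency scanner (file names and contents invented; Python used for brevity, not as a real build tool):

```python
import re
from graphlib import TopologicalSorter

# Invented sources: dependencies are declared *inside* the files,
# so the build system must scan them first to learn the build order.
sources = {
    "math.cppm": "export module math;",
    "geom.cppm": "export module geom;\nimport math;",
    "app.cpp":   "import geom;\nimport math;",
}

module_of = {}  # module name -> file that exports it
imports = {}    # file -> set of module names it imports
for fname, text in sources.items():
    m = re.search(r"export module (\w+);", text)
    if m:
        module_of[m.group(1)] = fname
    imports[fname] = set(re.findall(r"^import (\w+);", text, re.M))

# Build the file-level dependency graph and derive a valid compile order.
graph = {f: {module_of[mod] for mod in deps} for f, deps in imports.items()}
order = list(TopologicalSorter(graph).static_order())
print(order)  # ['math.cppm', 'geom.cppm', 'app.cpp']
```

This scan-then-schedule step is exactly the extra work a `make`-style build system never needed when dependencies were plain header includes declared to it up front.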
Windows 7 does have a delete confirmation for Undo Copy. However, either Windows 8 or 10 removed that confirmation. One more reason people miss Windows 7.
Well, my take is that the language isn't that great, but I really really dislike the ecosystem.
The sheer amount of code you get, even when using seemingly lightweight dependencies, is just such a dumpster fire.
I once ran cloc on a React getting-started example. The example, including its dependencies, contained more than 3 times the lines of code of a 3D game engine (targeting PS4/5) plus game logic plus dependencies for a (non-indie) title I was porting around that time.
Honestly, I find JavaScript better than, for example, Python on this front. JavaScript is pretty good at splitting functions up into individual packages and even better at using tree shaking to make sure you only ship the code you use. Many other languages have single huge packages that contain every function vaguely related to an area and no easy way to grab just one of them. Also, many languages 'hide' most of their third-party LoC inside pre-compiled .so files, rather than having it show up in the source tree.
In one case I had a 150-ish LoC Python command line tool that I wanted to package up and distribute, and it came in at over 150 MB because I needed a single function from scikit-image, which contains hundreds of image processing functions and pulls in all of both SciPy and NumPy as dependencies.
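To illustrate the mechanism (with a stdlib package standing in for scikit-image, so the snippet is self-contained): importing "one function" in Python still loads the entire module chain behind it.

```python
import sys

# Sketch: count what the import machinery loads to give us one function.
# xml.dom.minidom is a stand-in for a heavy package like skimage here.
before = set(sys.modules)
from xml.dom.minidom import parseString  # we only want this one function...
pulled_in = sorted(set(sys.modules) - before)
print(len(pulled_in), pulled_in[:5])
# ...but Python had to load xml, xml.dom, xml.dom.minidom, and their
# dependencies anyway. With skimage, that chain includes scipy and numpy.
```

There is no tree shaking at this level: packaging the tool means shipping everything those modules transitively touch.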
How many lines of code were from devDependencies (i.e., for use while developing) and how many ended up in the final production bundle actually intended to be served to the client?
https://eu.perixx.com/products/perixx-ergo-mechanical-keyboa...