Not a Java implementation, but the original game was written in Java. Later, Microsoft bought Minecraft and rewrote it as the Bedrock Edition, which runs on Xbox, tablets, etc. But the community writes mods in Java.
Both exist now and get roughly the same feature set, but the Java version remains popular given the vast variety of mods and servers.
> Minecraft: Java Edition runs on Windows, Mac, and Linux; Minecraft: Bedrock Edition runs on Windows.
(From their own website. Bedrock might work with Wine etc.)
For a game as popular as Minecraft, where every year a fresh cohort of young players reaches an age suitable for playing it, it would be madness to discard Linux and Mac users and possibly push the modding community to some other game.
There is an open-source launcher to run Bedrock on Mac and Linux, and it runs well. Bedrock, however, still isn't as popular because servers and mods are more of an afterthought, so not a lot of effort has been put into making it developer-friendly.
As I recall, the C++ reimplementation of Minecraft predates the Microsoft sale. Unless they did a complete rewrite I don't know about, Bedrock is distantly based on the old mobile/console version of Minecraft.
> Free forever for teams up to 5. Unlimited search, unlimited history.
I understand the strategic value of offering unlimited features to differentiate from competitors like Slack, but it might drive some amount of anxiety. Buyers may question long-term sustainability or fear undisclosed "shadow" caps.
Since engineering limits are inevitable to prevent abuse (especially on free accounts), it might be better to set specific, generous expectations upfront. For example, 2 years of freeform search plus unlimited "tagged" (i.e. Decision Inbox) search. This avoids the skepticism that comes with promising "no limits" forever. It also avoids the trap of needing to announce a change later with predictably negative reactions.
If you do want to offer unlimited, then planning ahead with hard-to-hit-unless-you're-trying messages/hr limits might help you tame growth and avoid abuse. My initial thought when seeing unlimited anything is "I could write a filesystem on top of that" - especially if you allow attachments. :P
Most people say their number one complaint is limited history. But then you offer that, and they realize it was not such a big deal. Slack still wins on so many levels that I don't see anyone willing to move any time soon.
Just sell extra storage at a reasonable price. That's the most transparent system you can get.
Some users will never hit more than a few GB, as it will be nearly all text. Other people will share 100MB video clips daily or use it as an easy way to transfer files between users in a company.
Maybe have an option to expire attachments on a separate timer, or the ability to set a cap where the oldest files get removed once it is exceeded, for cost-control-conscious companies.
Your costs will change and shift over time. Personally, I don't trust anything that says "Free Forever" or "Unlimited". Give a real limit and figure out the transition. "Free now, and no plans to change, but if it does, we will give you one year to transition" is much more confidence-building than "Free forever".
There are, and oftentimes they're stuck in a loop of presenting decks and status updates and writing proposals rather than doing this kind of research.
That said, interpreting user feedback is a multi-role job. PMs, UX, and Eng should be doing so. Everyone has their strengths.
One of the most interesting things I've had a chance to be a part of is watching UX studies. They take a mock (or an alpha version) and put it in front of an external volunteer and let them work through it. Usually PM, UX, and Eng are watching the stream and taking notes.
This is often (though not always) a blanket statement.
Logs are always generated, and logs include some amount of data about the user, if only environmental.
It's quite plausible that the spellchecker does not store your actual user data, only information about the request, or that error logging includes more UGC than intended.
Note: I don't have any insider knowledge about their spellcheck API, but I've worked on similar systems which have similar language for little more than basic request logging.
> Preliminary information about the accident remains scarce, though two people familiar with the aircraft tell The Air Current that the aircraft in question, N704AL, had presented spurious indications of pressurization issues during two instances on January 4. The first intermittent warning light appeared during taxi-in following a previous flight, which prompted the airline to remove the aircraft from extended range operations (ETOPS) per maintenance rules. The light appeared again later the same day in flight, the people said.
No idea about the accuracy of the site. And it seems like they have some script that prevents text highlighting for whatever reason (turn off Javascript).
Well, that's an interesting thing. During taxi-in, the cabin altitude should be the ground altitude; outflow valves open at touchdown.
Hard to understand how an incipient failure could manifest then (e.g. from increased leakage).
Of course, there are warning lights for excessive cabin pressure, etc., too... which would point to a different theory of the problem than a structural manufacturing problem.
Jon Ostrower is one of the best aviation reporters in the business, and The Air Current is a site many professionals and executives in the industry trust.
It's too bad that asking "source?" comes across as hostile unless clarified to be otherwise. Maybe the internet should adopt something similar to the "/s" tag that signals that sentiment.
Asking for any sort of clarifying information inevitably leads to argumentation on Reddit. It’s like we’ve all learned to be so polite that the truth barely matters (I’m exaggerating of course).
You'd think so, but for datacenter workloads it's absolutely common, especially if you're just scheduling a bunch of containers together. Computation also doesn't happen in a vacuum, unless you're doing some fairly trivial processing you're likely loading quite a bit of memory, perhaps many multiples of what your business logic is actually doing.
It's also not as simple as GB/s/core, since cores aren't entirely uniform, and data access may cross core complexes.
I'm not sure what you mean by datacenter workloads.
The work I do could be called data science and data engineering. Outside some fairly trivial (or highly optimized) sequential processing, the CPU just isn't fast enough to saturate memory bandwidth. For anything more complex, the data you want to load is either in cache (and bandwidth doesn't matter) or it isn't (and you probably care more about latency).
I had these two dual-18-core xeon web servers with seemingly identical hardware and software setup but one was doing 1100 req/s and the other 500-600.
After some digging, I realized that one had 8x8GB RAM modules and the slower one had 2x32GB.
I did some benchmarking then and found that it really depends on the workload. The www app was 50% slower. Memcache 400% slower. Blender 5% slower. File compression 20%. Most single-threaded tasks no difference.
The takeaway was that workloads want some bandwidth per core, and shoving more cores into servers doesn't increase performance once you hit memory bandwidth limits.
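Back-of-the-envelope sketch of why the DIMM count mattered here (the channel counts and DDR4-2400 speed below are assumed for illustration, not the actual servers' specs): with only 2 of 8 channels populated, peak bandwidth drops to a quarter, and so does the bandwidth each core can claim.

```python
# Assumed platform for illustration: dual-socket Xeon, 4 DDR4-2400
# channels per socket (8 total), 36 cores. Real figures vary.
channel_bw_gbs = 2400e6 * 8 / 1e9   # ~19.2 GB/s per DDR4-2400 channel

bw_fast = 8 * channel_bw_gbs        # 8x8GB, all channels populated
bw_slow = 2 * channel_bw_gbs        # 2x32GB, only two channels used

cores = 36
print(f"fast config: {bw_fast / cores:.1f} GB/s per core")  # ~4.3
print(f"slow config: {bw_slow / cores:.1f} GB/s per core")  # ~1.1
```

Which lines up with the pattern above: bandwidth-hungry multi-threaded workloads (memcache, the web app) suffer badly, while single-threaded tasks that never saturate even two channels see little difference.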
It's usually bottlenecked by memory latency, not bandwidth. People talk about bandwidth, because it's a simple number that keeps growing over time. Latency stays at ~100 ns, because DRAM is not getting any faster. Bandwidth can become a real constraint if your single-threaded code is processing more than a couple of gigabytes per second. But it usually takes a lot of micro-optimization to do anything meaningful at such speeds.
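To put the latency point in numbers (a sketch using the ~100 ns figure above): fully dependent loads, as in pointer chasing, can only complete one cache-line fetch per round trip, which caps single-threaded throughput far below the headline bandwidth figures.

```python
latency_ns = 100   # typical DRAM access latency, per the comment above
line_bytes = 64    # one cache line fetched per miss

# Dependent loads: each miss must complete before the next address is
# known, so throughput is one cache line per full latency period.
loads_per_s = 1e9 / latency_ns             # 10 million loads/s
dependent_bw = loads_per_s * line_bytes    # 640 MB/s

print(f"{dependent_bw / 1e6:.0f} MB/s")    # ~640 MB/s, nowhere near peak
```

This is why prefetch-friendly streaming code can approach the multi-GB/s range while latency-bound code cannot, regardless of how much bandwidth the memory system advertises.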
Except it's also trivial to buy or produce tables of pre-hashed emails, so this cloak of "oh we don't know who you are, it's a hash!" is usually just lipservice.
They're not literally passing around the hash. Holders of hash(email) <=> browser cookie associations are heavily incentivized for both regulatory and also competitive reasons to not blast that information around the internet -- or even to let direct partners A & B identify overlaps without their being in the middle.
When passing identifiers, there's generally some combination of lookup tables, per-distribution salted hashes, or encryption happening to make reverse mapping as difficult as possible.
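As a sketch of the per-distribution salting idea (the function name, salt values, and HMAC-SHA256 choice here are illustrative assumptions, not any particular vendor's scheme): keying the hash with a per-partner secret means partner A's identifiers can't be joined against partner B's without the party holding the salts in the middle.

```python
import hashlib
import hmac

def partner_id(email: str, partner_salt: bytes) -> str:
    """Derive a partner-specific identifier: HMAC keyed with a secret
    per-partner salt, so IDs for the same email differ across partners."""
    normalized = email.strip().lower().encode()
    return hmac.new(partner_salt, normalized, hashlib.sha256).hexdigest()

id_a = partner_id("user@example.com", b"secret-salt-for-partner-A")
id_b = partner_id("user@example.com", b"secret-salt-for-partner-B")
assert id_a != id_b  # same person, non-joinable identifiers
```

The keyed construction also blocks the precomputed-table attack mentioned elsewhere in this thread, since the table would have to be rebuilt per salt.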
This is one of the things that drives me nuts when hardcore privacy advocates start wading into browser feature discussions and complaining about things being used to fingerprint users.
I mean, can eye-tracking in a WebXR session be used to identify users? Yes, clearly that is a possibility. But will the addition of eye-tracking increase the identifiability of users? No, not in the least, because users are already identifiable by means that involve core browser features.
But frequently, the "privacy advocates" win and we're left with a web platform that has a lot of weird missing functionality in comparison to native apps, pushing developers to either compromise on functionality or develop a native app. Compromising is bad for users. And developing a native app can be bad for the developer, if one considers their existing investment in web technologies. Or bad for both the developer and users, when one considers the vig that app stores charge, or the editorial control that app stores enforce over socially-controversial-yet-not-actually-illegal topics. Or just for users, when one considers that the app stores simply hand app developers a user identity without even making them work for it with fingerprinting.
And often, the voices that are loudest in defence of "privacy" are browser developers that also just so happen to be employed by said app store vendors.
I think the idea is that you can generate the MD5 hash of all, say 8 letter, @gmail.com addresses trivially and since the email hashes used for targeting don’t have a salt, it’s a one time “expense” to build the reverse lookup table
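A minimal sketch of that precomputation (tiny alphabet and length here purely for illustration; a real table over 8-letter addresses would be on the order of 26^8 entries, still very feasible to build once):

```python
import hashlib
from itertools import product

# Precompute md5(email) -> email for every short address over a toy
# alphabet. Unsalted hashes make this a one-time cost.
alphabet = "ab"
table = {}
for letters in product(alphabet, repeat=3):
    email = "".join(letters) + "@gmail.com"
    table[hashlib.md5(email.encode()).hexdigest()] = email

# Reversing an "anonymous" hash is now just a dictionary lookup.
target = hashlib.md5(b"aba@gmail.com").hexdigest()
print(table[target])  # aba@gmail.com
```

Since the hash function and the lack of salt are fixed across the whole ecosystem, anyone can build the same table and every "anonymized" identifier maps straight back to an address.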
Android also reaps permissions that haven't been used recently. In the case of location, Android prompts for renewal even if it has been used recently.
Is it though? The new verified system was rolled out really poorly.
There should have been a migration path from legacy to new verified, but instead they just unverified everyone (including obviously government accounts that under the new rule should retain a grey check).
I must be using a different site from you. Letting people pay to get boosted has turned the top of every thread into a hive of emoji-pasting, cruel, low-effort cretins.