efuquen's comments

And I would say the deficiencies in Profiles and the fact that Safe C++ was killed are the technical decisions reflecting the culture problem.


> The folks that taught the author should hang their head in shame that their student is producing such rubbish.

This is unnecessary and rude; you should hang your head in shame for that. I wish some people in this community weren't so reactionary and would engage with empathy instead of trying to personally roast people as soon as they don't agree with something.

Someone can tell a story on the internet; it doesn't have to be some rigorous experiment or proof.


I completely disagree.

Computer Science has a massive ethics crisis. Uncritical adulation and a total lack of accountability or consequence is part of that.

There is a massive misallocation of capital which is burning opportunity for our society. Users are getting terrible experiences and systems because people read this sort of thing and believe it. Trust in our technology is eroded and this has consequences for actual people. We have abandoned the standards that protected people and you have the view that these standards, or a shadow of them even, are unnecessary?

Someone has taught a generation that this is all ok. It isn't.


I don't disagree with some of the points you are making here, but the main point of my own comment is that your last sentence above was mean-spirited and unnecessary.

If you want to have a conversation about quality and ethics in computing, and about how this post may be pushing a narrative that is not in line with your views on this, I think that is worthwhile to have. But personal denigration of someone else isn't necessary in doing that.


In a previous blog post they basically said they will never make a Go 2, and also addressed a lot of things about compatibility:

https://go.dev/blog/compat

In particular they said:

> The end of the document warns, “[It] is impossible to guarantee that no future change will break any program.” Then it lays out a number of reasons why programs might still break.

> For example, it makes sense that if your program depends on a buggy behavior and we fix the bug, your program will break. But we try very hard to break as little as possible and keep Go boring.


> In a previous blog post they basically said they will never make a Go 2

No, they didn't say that; they said it wouldn't be backwards-incompatible with Go 1. Relevant quote:

> [...] when should we expect the Go 2 specification that breaks old Go 1 programs?

> The answer is never. Go 2, in the sense of breaking with the past and no longer compiling old programs, is never going to happen. Go 2 in the sense of being the major revision of Go 1 we started toward in 2017 has already happened.


Nice visual tool, but wildly optimistic. Walk times to and from stations and during transfers can eat up huge amounts of time in a commute, and that assumes trains are running well. Setting aside walk-to-station times, which clearly can't be taken into account in this tool, the transfer times between trains are also not accounted for. Transferring from any train that is taking you up the west side of Manhattan to one that takes you up the east side (or vice versa) takes a lot of extra time, but the map treats it as if it doesn't matter which side of the island your original train takes you up.


Transfers take a very relevant amount of time, even during rush hour when the trains are running frequently. Aside from waiting for the train, which does not keep a strict schedule and can get delayed for many reasons, there is the walking time to the other train, which can be significant even in stations where you don't have to exit the subway to transfer. Factoring in a minimum of 5-10 minutes extra per transfer is a safe bet.


Why does the reason the police are doing it matter? Isn't either action an egregious violation of privacy? Sometimes society has allowed these things for emergency reasons, but stop and frisk was a program that was in place for many, many years. I can't understand how you can view the forced cell phone search as a clear violation of rights but defend stop and frisk, other than that one is happening in evil Russia and the other happened in the US.

People talk about stop and frisk in such an abstract sense because it happened in poor minority neighborhoods, so most people talking about it didn't have to go through it. Imagine strangers being able to legally frisk your whole body for no apparent reason. It is such a huge violation of personal space and privacy, and it's so demeaning, especially when you know it's being specifically targeted at your community.


Stopping and frisking for guns does not get into a thought-crime type of situation where what you say or write matters.

Asking to review your phone does.

One is a much more serious invasion of privacy in my view.

This is clear generally. We ALREADY walk through metal detectors at airports. We walk through body scanners. We do NOT expect to have to turn over our phones (unless going through a country border perhaps and if I expected it I'd just have a blank phone for that).

So we have already made the distinction here and it's not unreasonable (even if you don't agree that removing an assault weapon from a felon with a restraining order is different from going through someone's phone messages).


> You cannot compare these two levels of violence.

The comparison is between two things: the police forcing you to give them access to your phone for no explicit reason, or the police forcing you to allow them to physically and invasively search your whole body for no explicit reason. You are conflating that comparison with a bunch of other things happening external to the specific actions the police are allowed to take.

The reasons why both are happening don't mean you can't objectively compare whether what the police are doing is acceptable, apart from the larger situation that is causing them to do it. The question is: if one seems unacceptable, shouldn't the other be too? If you want to talk about levels of violence between these two, stop and frisk certainly seems to be something closer to physical violence, or at least to have the greater potential for it, than searching your phone.


You CAN compare oranges to apples.

Your argument is completely valid, except there is a missing [postmodern?] point at a meta level: you are shifting the focus from a terrible event that should be solved at a global scale to an event that could be handled at a local scale, AND in the US that should be much simpler than in Russia.

In the US you have a lot of ways to do that, and I am always amazed that US democracy is failing at such a basic level when I compare it with the righteous US people who achieved amazing things in the US's past. If you refuse to show your mobile phone, the law can protect you.

In Russia there are far fewer options, and civilians are risking their lives to an amazing degree. The law does not exist.


So how can the people who are targeted - mostly minorities - use "Democracy" when the majority either doesn't care about the mistreatment of minorities or even cheerleads it? Not to mention that politicians are afraid of blowback from police unions and the majority who support them, since people don't get stopped for "driving while White".


I don't have the answer, but I think there are enough creative people in the US to find it and advance the state of the art in politics. Even if it is not in the US, it can happen anywhere.

I also think that the statement "software is eating the world" has not touched politics enough yet, but it will. I'm not talking about FB and social media in general.


> Yet somehow this is fairly obscure knowledge unless you're into serious game programming or a similar field.

Because the impact of optimizing for the hardware like that is often not that important in many applications. Getting the absolute most out of your hardware is very clearly important in game programming, but in web apps where the scale being served is not huge (the vast majority)? Not so much. And in that context developer time is more valuable when you can throw hardware at the problem for less.

In traditional game programming you had to run on the hardware people used to play, so you were constrained by the client's abilities. Cloud gaming might(?) be changing some of that, but GPUs are also super expensive compared to the rest of the computing hardware. Even in that case, with the amounts of data you are pushing you need to be efficient within the context of the GPU; my feeling is it's not easily horizontally scaled.
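To make "optimizing for the hardware" concrete, here's a rough sketch, purely illustrative and assuming the kind of knowledge being referenced is cache-friendly memory access (the classic game-programming concern): the same work done in memory order versus striding across memory.

    /* Hypothetical example, not from the article: sum a 2048x2048 grid twice. */
    #include <stdio.h>
    #include <time.h>

    #define N 2048
    static double grid[N][N];

    int main(void) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                grid[i][j] = i + j;

        /* Row-major: consecutive accesses touch consecutive memory, so cache
           lines and the prefetcher do most of the work. */
        clock_t t0 = clock();
        double sum_rows = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum_rows += grid[i][j];
        clock_t t1 = clock();

        /* Column-major: identical arithmetic, but each access jumps N * 8
           bytes, so a large grid misses cache constantly. */
        double sum_cols = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum_cols += grid[i][j];
        clock_t t2 = clock();

        printf("row-major:    %.3fs (%.0f)\n", (double)(t1 - t0) / CLOCKS_PER_SEC, sum_rows);
        printf("column-major: %.3fs (%.0f)\n", (double)(t2 - t1) / CLOCKS_PER_SEC, sum_cols);
        return 0;
    }

On typical hardware the second loop is noticeably slower despite doing identical work; that gap is what data-oriented game code is built around, and what a small CRUD service can usually afford to ignore.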


IMO we are only scratching the surface of cloud gaming so far. Right now it's pretty much exclusively lift-and-shift, hosted versions of the same game, in many cases running on consumer GPUs. Cloud gaming allows for the development of cloud-native games that are so resource intensive (potentially architected so that more game state is shared across users) that they would not be possible to implement on consumer hardware. They could also use new types of GPUs that are designed for more multi-tenant use cases. We could even see ASICs developed for individual games!

I think the biggest challenge is that designing these new types of games is going to be extremely hard. Very few people are actually able to design performance-intensive applications from the ground up outside of well-scoped paradigms (at least web servers, databases, and desktop games have a lot of prior art and existing tools). Cloud-native games have almost no prior art and almost limitless possibilities for how they could be designed and implemented, including, as I mentioned, even novel hardware.


I've thought about this off and on, and there are certainly interesting possibilities. You can imagine a cloud renderer that does something like a global scatter / photon mapping pass, while each client's session on the front-end tier does an independent gather/render. Obviously there are huge problems in making something like this work practically, but I just mention it as an example of the sort of more novel directions we should at least consider.
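To sketch the split (this is a deliberately naive toy, not a workable renderer - the flat ground plane, the uniform photon splat, and the brute-force gather are all just stand-ins): one shared scatter pass feeds any number of independent per-client gathers.

    /* Hypothetical toy only: one shared "cloud" scatter pass, many client gathers. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N_PHOTONS 100000
    typedef struct { double x, z, power; } Photon;

    /* Shared pass: drop photons from a light onto the ground plane y = 0.
       A real renderer would trace them through the scene; this just splats
       them uniformly so the example stays tiny. */
    static void scatter(Photon *map) {
        for (int i = 0; i < N_PHOTONS; i++) {
            map[i].x = ((double)rand() / RAND_MAX) * 20.0 - 10.0;
            map[i].z = ((double)rand() / RAND_MAX) * 20.0 - 10.0;
            map[i].power = 1.0 / N_PHOTONS;
        }
    }

    /* Per-client pass: estimate irradiance at (x, z) from photons within a
       radius. Each session can run this for its own view without redoing
       the scatter work. */
    static double gather(const Photon *map, double x, double z, double r) {
        double sum = 0.0;
        for (int i = 0; i < N_PHOTONS; i++) {
            double dx = map[i].x - x, dz = map[i].z - z;
            if (dx * dx + dz * dz <= r * r) sum += map[i].power;
        }
        return sum / (3.14159265358979 * r * r);
    }

    int main(void) {
        Photon *map = malloc(sizeof(Photon) * N_PHOTONS);
        scatter(map);  /* expensive, done once, shared across clients */
        printf("client A sample: %f\n", gather(map, 0.0, 0.0, 1.0));
        printf("client B sample: %f\n", gather(map, 5.0, -3.0, 1.0));
        free(map);
        return 0;
    }

The huge practical problems mentioned above (latency between tiers, keeping the shared structure up to date as the scene changes) are exactly the parts this toy skips.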


If the "metaverse" ever gets anywhere beyond Make Money Fast and reaches AAA title quality, running the client in "the cloud" may be useful. Mostly because the clients can have more bandwidth to the asset servers. You need more bandwidth to render locally than to render remotely.

The downside is that VR won't work with that much network latency.


TBH I don't think cloud gaming is a long-term solution. It might be a medium-term solution for people with cheap laptops, but eventually the chip in cheap laptops will be able to produce photo-realistic graphics and there will be no point in going any further than that.


Photo-realistic graphics ought to be enough for anybody? This seems unlikely; there are so many aspects to graphical immersion that there's still plenty of room for improvement, and AAA games will find them. Photo-realistic graphics is a rather vague target; it depends on what and how much you're rendering. Then you need to consider that demand grows with supply, e.g. with higher resolutions and even higher refresh rates.


There are diminishing returns. If a laptop could play games at the quality of a top end PC today, would people really want to pay for an external streaming service, deal with latency, etc just so they can get the last 1% of graphical improvements?

We have seen that there are so many aspects of computing where once it's good enough, it's good enough. Like how onboard DACs got good enough that even the cheap ones are sufficient and the average user would never buy an actual sound card or USB DAC. Even though the dedicated one is better, it isn't that much better.


I think what you're missing is that

1) you still need to install and maintain it and there are many trends even professionally that want to avoid that

2) just because you could get it, many may not want it. I could easily see people settling for a nice M1 MBA or M1 iMac and just streaming the games if their internet is fine. Heck, wouldn't it be nicer to play some PC games in the living room like you can do with SteamLink?

3) another comment brings up a big point: this unlocks a new "type" of game that can be designed in ways that take advantage of more than a single computer's power, enabling games with massively shared state that couldn't reliably be done before.

To counter my own points, though: 1) I certainly have a beefy desktop anyway; 2) streaming graphics are not even close to local graphics (a huge point); 3) there is absolutely zero way they're gonna stream VR games from a DC to an average residential home within 5 years, IMHO.


I think the new MacBooks are more proof that cloud streaming won't be needed. Apple is putting unreal amounts of speed into low-power devices. If the M9 MacBook could produce graphics better than the gaming PCs of today, would anyone bother with cloud streaming when the built-in processing produces a result that is good enough? I'm not sure maintenance really plays much of a part; there is essentially no maintenance of local games, since the clients take care of managing it all for you.

Massive shared state might be something that is useful. I have spent some time thinking about it, and the only use case I can think of is highly detailed physics simulations with destructible environments in multiplayer games, where synchronization traditionally becomes a nightmare since minor differences cascade into major changes in the simulation.

But destructible environments and complex physics are a trend that came and went. Even in single-player games, where it's easy, they take too much effort to develop and are simply a gimmick that adds only a small amount of value for players. For everything else it seems easier to just pass messages around to synchronize state.
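As a tiny, purely hypothetical illustration of why that synchronization is a nightmare: two copies of a chaotic update rule (here the logistic map standing in for a physics step) that start one part in a billion apart end up in completely different states within a few dozen iterations.

    /* Illustrative only: a tiny divergence compounds under a chaotic update. */
    #include <stdio.h>

    int main(void) {
        double a = 0.400000000;  /* "client A" state        */
        double b = 0.400000001;  /* "client B", off by 1e-9 */
        for (int step = 1; step <= 60; step++) {
            a = 4.0 * a * (1.0 - a);
            b = 4.0 * b * (1.0 - b);
            if (step % 10 == 0)
                printf("step %2d: a=%.6f b=%.6f diff=%+.6f\n", step, a, b, a - b);
        }
        return 0;
    }

Computing the simulation once, in one place, sidesteps that whole class of divergence, which is the appeal of the shared-state approach.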


> If a laptop could play games at the quality of a top end PC today, would people really want to pay for an external streaming service, deal with latency, etc just so they can get the last 1% of graphical improvements?

Think of it from a different direction: if/when cloud rendering of AAA graphics is practical, you can get a very low-friction, Netflix-like experience where you just sit down and go.


IMO the service Netflix provides is the content library, not the fact that it's streaming. If the entire show had to download before playing, it would only be mildly less convenient than streaming it. But I don't think streaming adds that much convenience to gaming. If your internet is slow enough that downloading the game beforehand is a pain, then streaming is totally out of the question. And gaming is way, way less tolerant of network disruption, since you can't buffer anything.

Cloud gaming seemingly only helps in the case where you have weak hardware but want to play AAA games. If we could put "good enough" graphics in every device, there would be no need to stream. And I think in 10 years probably every laptop will have built-in graphics that are so good that cloud gaming is more trouble than it's worth. It might sound unrealistic to say there is a "good enough", but I think a lot of things have already reached this point. These days screen DPI is largely good enough, sound quality is good enough, device weight/slimness is good enough, etc.


I'd (gently) say you may be generalizing your own behavior too much. I often just have, say, 45 minutes to kill and will just browse Netflix to find something to start immediately. Having to wait for a download would most likely send me to something else. Since COVID started, one thing I've heard repeatedly from friends with kids is that they manage to carve out an hour or so for a gaming session, sit down, and then have to wait through a mandatory update that ends up killing much of it. Now add to that the popularity of Game Pass, and the possibility that a "cloud console" offers something similar... there are plenty of people who would love that service, IMO.


Cloud gaming allows for more shared state and more computationally intensive games (beyond just graphics). Maybe eventually clients will easily be able to render 4K with tons of shaders, but the state they're rendering could still be computed remotely. In a way, that's kind of what multiplayer games are like already.


You lose one USB-C port to get all the others. I've never needed 4 USB-C ports at a time, but I have definitely needed all the others. And one USB-C port is replaced by power, which I have often had to use one for anyway. So whether you bought into the full USB-C future or not, I don't see the extra ports hurting you. Maybe it's a 'whatever', but I don't see it as a reason to dislike it.


I guess my line of thinking is that Apple going full USB-C was one major reason the port took off for computers (USB sticks, drives, monitors, etc.). It feels like they surrendered after winning.

I never really cared for MagSafe, but I'm also not super annoyed about it being there (just an old/new proprietary charging port). The HDMI port, though... that creates the same feeling for me as if they had added a USB-A or VGA port, honestly.


I don't really see it as surrendering at all. The vast majority of consumers don't need the power the MacBook Pros offer. I see this more as an (IMO overdue) acknowledgement of the target audience. The M1 iMac and the M1 MacBook only have USB-C ports.


Because your risk becomes everyone's risk. The vaccine is becoming less and less effective against variants, and people talk as if death or hospitalization were the only bad outcomes. Long-term health problems from even 'mild' COVID infections are a thing, and the efficacy of vaccines at preventing them is not clear.

So getting a vaccine does not insulate the vaccinated from the risks the unvaccinated choose. Your choices affect what I can do and the risks to my health.

