The bridge is part of I-695, which is a beltway: https://en.wikipedia.org/wiki/Interstate_695_(Maryland). Inner loop refers to the inner lanes (traveling clockwise), outer loop the outer lanes (traveling counterclockwise).
They aren't independent; the city will tend to build up along the transit lines because of the easy access they provide. E.g., the NYC outer boroughs and the DC metro area.
That may be true for immovable train lines, but vehicle transit such as buses runs on routes that are subject to change, so developers cannot depend on the transit lines still being there in 10-20 years.
Bus lines instead tend to follow where the traffic wants to go. Around here, many shopping malls double as bus stations because the primary aim of transit seems to be circulating consumers around places they will spend money.
Consider the income levels of the staff that work at shopping malls and what that implies for their ability to pay for reliable personal transportation.
> Bus lines instead tend to follow where the traffic wants to go. Around here, many shopping malls double as bus stations because the primary aim of transit seems to be circulating consumers around places they will spend money.
This is circular reasoning though. Bus lines go to the mall, but malls are built where the bus lines go. Malls use a lot of space, so carving out a little bit for transit is easy.
If I remember correctly, the open source ATI drivers were always a bit buggy, and it wasn't that easy getting them installed either. The tradeoff was always the same: Nvidia, proprietary but works well; ATI, open but buggy.
As far as I'm aware, since AMD took over they've been fairly stable (although they occasionally omit support for the latest features until the next kernel release).
As a Navi 10 (5700 XT) owner, I can tell you those problems still exist. It used to be that at least once a week while gaming the driver would crash with some undecipherable error message in dmesg, and because the card had the reset bug the only recourse was to reboot the machine entirely. Four years later, the only thing that's changed is that the crash shows up less frequently (I'd say once every 3 months).
> Have you ruled out power supply issues and are you running at stock clocks (for CPU and RAM as well)?
Yes for both. No overclocking whatsoever.
> Anyway, going from "at least once a week" to "once every 3 months" means that 90% of your crashes have been fixed.
I don't think I'm supposed to be ok with a device I paid premium money for crashing once every three months with no explanation from the manufacturer. They could've fixed 99% for all I care; it's still absurd that it's even an issue in the first place.
> What kind of message would you expect that would be more decipherable?
One that would lead me to an actual solution, or at least an explanation, not just year-old threads of people reporting this exact issue with replies saying it was fixed in kernel version X, where X is different for each thread.
The part about them being buggy is definitely true.
Up until somewhere around 2016-2017 the ATI/AMD drivers were really bad.
I had an "HD 7850" GPU on Linux around that time and it was barely usable. The performance was less than half of what you got on Windows, and the drivers would crash very often, sometimes several times a day if I was trying to play games like Team Fortress 2.
It was so bad that I decided to replace the HD 7850 with a new GTX 970 and not to buy any more AMD GPUs for the indefinite future. The GTX 970 was stable and performed very well with the closed source drivers, and other than them being closed source I never had an issue with them. I always installed the closed drivers through the system package manager, which handled all of the tricky stuff for me (Arch Linux maintains the nvidia driver as a system package and makes sure it runs on the current kernel before releasing it).
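For anyone who hasn't gone that route, the Arch install is roughly just the following (assuming the stock or LTS kernel; other kernels want the dkms variant instead):

    # proprietary driver built against the stock "linux" kernel
    pacman -S nvidia

    # or, if you're on the LTS kernel
    pacman -S nvidia-lts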
In modern times the situation has flipped, though. I still haven't bought an AMD GPU since then, but I'm pretty sure my next one will be.
I agree; 2016-17 was about the turning point. I bought a Fury X around then, and it was flawless. In contrast, my old nvidia cards had become unusable.
On the AMD, FreeSync and HDMI audio didn't work at first. (For any card; the driver documentation said those features were a work in progress.)
Anyway, I unplugged it for a year or so and recently plugged it back in. One apt-get upgrade later, FreeSync and HDMI audio just work.
It's gotten to the point where I'd opt for an ARM laptop over one without AMD or Intel graphics. From what I can tell, suspend/resume doesn't work on Intel CPUs (on Windows or Linux), so it's basically AMD GPU or no x86 at all from a compatibility perspective. (Did AMD also eliminate S3 suspend, and not replace it with a working alternative?)
I also had an HD 7850, and though I pushed it less than you, I never noticed any huge issues.
It was in a uniquely terrible position, being one of the last cards released that was supported by radeon when all the development had moved to amdgpu, which it could supposedly run if you jumped through the right hoops. I remember the Xorg feature table showing several things working for older and newer models, but not for the 7850.
Still, my experience with it led to another AMD card that I've also been quite happy with.
I believe this is talking about radeonhd/radeon/ati circa 2015 or earlier.
Around then, you still had to install the corresponding X11 portion of the drivers, though the nvidia equivalent had the same limitation.
radeon/radeonhd, or fglrx (which was the proprietary AMD driver), absolutely worked worse than nouveau or the proprietary nvidia drivers at that time. It was only a couple of years into amdgpu that the tables turned.
At this point it would be nice if they'd backport their Linux drivers to Windows, as I'm now on my third AMD GPU in 12-13 years (HD 5770, R9 290X, 6900 XT) to have issues where the driver will randomly crash when playing hardware-accelerated video on one monitor while running a DirectX game on another under Windows.
I'm pretty sure I needed to mess with xorg.conf and other settings to get things like screen resolution and Compiz working correctly. I don't know what part of the stack was responsible for those issues, but I thought it was related to the graphics driver.
I could be misremembering though, this was 15+ years ago now.
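For context, "messing with xorg.conf" in that era usually meant something along these lines (the identifiers, driver, and mode line below are just illustrative, not what I actually had; the Composite option was the part Compiz cared about):

    Section "Device"
        Identifier "Card0"
        # "radeon" for the open driver of that era; "fglrx" for the proprietary one
        Driver     "radeon"
    EndSection

    Section "Screen"
        Identifier   "Screen0"
        Device       "Card0"
        DefaultDepth 24
        SubSection "Display"
            Depth 24
            # force the monitor's native resolution
            Modes "1680x1050"
        EndSubSection
    EndSection

    Section "Extensions"
        # compositing extension needed by Compiz
        Option "Composite" "Enable"
    EndSection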
Those are client CPUs, which have very different behavior around power management than server parts. However, AVX downclocking has mostly gone away with Ice Lake, and hopefully Sapphire Rapids does away with it permanently (except on 512-bit vectors).