I've been in Computer Science graduate courses and lectures at one of the top programs in the US where it seemed like every week the first 5 to 10 minutes were spent trying to get the projector to work with the lecturer's laptop.
What amazes me on conference calls is how no one can explain audio issues. Really, really smart people on a call who can explain registers, cryptography, and kernel tuning.
Sounds like you are having network issues/bit errors? You are breaking up.
Sounds like your bluetooth is having issues? You are breaking up.
Sounds like they are mobile, and in a weak service area? You are breaking up.
As an audio engineer, DSP programmer, and someone who supports such systems professionally:
That is because the breaking up mostly sounds the same: a codec specialized for low-bitrate transmission being pushed into even lower bitrates, packets being dropped entirely or arriving too late, etc.
Who knows where such a thing comes from? It could be any number of things on a huge technological stack distributed over a large geographical area.
A domain expert should of course be able to distinguish various bad conditions by ear, e.g. clipping, saturation, wrong microphone distance/orientation, signal interference, hum, bad grounding etc.
But not all people are good at analytical hearing, regardless of whether they are engineers. That is why people pay the likes of me.
Saying the network guru should automatically know the cause of a packet drop is like saying the postal worker automatically knows why a letter never arrived, or that a doctor automatically knows the cause of a cough.
This is mostly a consequence of the complexity of all the abstraction layers between "conference call" and "physical routed and switched connection". At best you might be able to identify that the visible/audible symptom is due to packet loss, but proving a root cause of the packet loss is extremely difficult. Through some tooling, like ThousandEyes (which I work on), you might be able to identify the hop in the path that's causing that forwarding loss, but unless you have access to that device it'd be impossible to prove exactly /why/ it has forwarding loss.
Any problem like that ultimately becomes a "5 Whys?" kind of troubleshooting to get to a real root cause, and from an end-user device you generally don't have the necessary access or data to answer more than 2-3 layers of abstraction.
I try to be more specific, but there's just so much going into modern sound processing. I once told a guy his Bluetooth connection was failing, but the real issue was that a plane was taking off overhead and the VC software was noise-cancelling everything.
Are those things distinguishable? I thought people say you're breaking up because they only see the symptom of audio cutting out and have no way to distinguish what the cause is.
Media-tech professional here: hard to impossible to distinguish, especially because knowing precisely which signal processing your video-conferencing solution is doing, under which conditions and in which version, is hard even for people who analyze a single version of the software in a lab environment.
That is like a relative saying you are not an IT expert because you don't immediately know the distinct root cause of "the screen being black".
I just have a ChromeCast in each room that I lecture in. Easy to bounce a tab over to the big screen, and it turns on the TV/projector. It also helps prevent notification or other screen sharing related mishaps.
My biggest complaints about this approach are that I can't easily see speaker notes (I have a fiddly workaround that I can use if I need it, but it would be nice if Google Slides would support the ChromeCast use case a little better..) and that the TVs/projectors tend to screw around for 10-15 seconds before automatically choosing the right source.
I know I can open other tabs to look at the presentation, so I assume that would work. But it's not good optics for me to have a phone out when I ask my students not to do so.
I can get a speaker-notes tab and a normal presentation tab open and cast one of them, but they all jump around and do the wrong thing on my screen (trying to maximize, etc.) and require a lot of coercion; it's too much time to start a normal lecture.
Arguably it still hasn't matured. Or at least it's nowhere near being a solved problem, and we'll probably sidestep it before ever coming up with a real solution.
I'd put it in the same bucket as TODO lists. After centuries we still haven't settled on one.
Depends on the printer. I still remember the LaserJet 4000 series with great fondness. They were absolute tanks that almost never broke. You did have to replace wear items like rollers, but that was it.
I used to joke it was ironic how people in management would always have their presentations up and running almost immediately while programmers seemed to always struggle with projectors.
These days however nothing seems to work for anyone.
Technically yes, but since the WM usually comes as part of a desktop environment that also includes utilities and daemons which help with that sort of thing, to many folks "my WM makes a big difference in how easy it is to set up external monitors" is often true.
It drives me crazy that there's no long-running, named process in e.g. GNOME the DE that I could run under any WM to make the Fn-volume keys on my laptop keyboard change the volume, the same as when I'm running GNOME. Why isn't that piece separated out? Or at least, where is the code so I can separate it out myself?
Questions and experiences like that cause many to conflate behavior and experience with WM/DE.
I once helped Andrew Tanenbaum get his laptop to work with the projector in the room. His presentation was only slightly delayed because of it, but the irony ...
The burden shouldn't be on users. Why are computer manufacturers putting out a hardware/OS combination that can't work reliably with an external display? Don't they even test this common use case?
Even my Mac (which, to Apple's credit, does work 100% reliably with projectors) still struggles when I plug in multiple external monitors, blanking the screen, turning on one external display, blanking them both, turning on the other one, and so on. This is a manufacturer who normally accepts only a smooth, polished user experience, and they still can't get it perfect. What chance do Lenovo or similar garbage-tier plastic box manufacturers have of getting it right?
It’s bad, though, in that it reduces your power over your browsing experience. We should get a choice on that. uBO is a good actor and I trust them. Also, Google crippled storage for filter lists in v3 while Firefox did not. Clearly the storage limit is Google's way of making adblockers less effective; it's in their interest to put as many ads in your face as possible.
This is it exactly. They should not remove Manifest V2; they should make it more explicit that an add-on is v2 or v3, and let the end user choose (with the default being to allow v3 and deny v2 add-ons).
When an untrustworthy add-on asks to be a v2 add-on, the user can be appropriately suspicious, while add-ons like uBlock keep working at full power.
Of course, the whole reason google did it is to remove effective adblocking.
stdlib templates are a bit idiosyncratic and probably not the easiest to start with, but they do work and don't have "weird issues" AFAIK. What issues did you encounter?
I don't know what issues others have had with it, but for me one notable thing is that html/template strips all comments out. This is by design, but it's not documented anywhere. I've proposed making this configurable, but my proposal has gotten no traction so far.
I am just trying Templ. I like what I am seeing for the most part. There are some tooling ergonomics to work out: lots of "suddenly the editor thinks everything is an error and nothing will autoimport or format" before it goes back to mostly working, go-to-definition jumps to the autogenerated code instead of the .templ file, a couple of things like that. But it is soooooooooo much better to deal with code gen than html/template. That thing is a PITA.
Passing data to templates that call templates that (maybe call other templates that) use the data. It is easy to call things in the wrong order, not provide the right values, or think you have access to some data when you totally don't; there is no type help, there is a bit of ceremony to get functions available, and I'm sure there is something else I'm forgetting. Overall, just a pain to work with.
So far, I'm enjoying that in Templ I can clearly see what arguments and types are passed to which views/partials, and that I can simply use standard Go functions to do whatever I need them to do.
Oh wow, it was not just some guy publishing fraudulent papers in fraudulent journals that nobody reads or cites. He had a giant impact and was cited tens of thousands of times!
Isn't it pretty niche to not want discussions of winter or alignment? I guess you can go read Nick Land? If there's not at least a mini-winter or alignment some time soon it's going full Nick Land, right?
What I mean is, Nick Land is the only person I know of who can at least sort of credibly claim to have a theory for why alignment isn't just not guaranteed, but is in fact impossible, and there's ~no chance of a lasting winter.
Even then, isn't it irresponsible to avoid talking about progress from other labs versus the probability of a winter? Either way you could get wiped out hard, investment-wise and product/startup-wise.