One thing most of those lack is an easy way to share your screen.
Now if anyone wants to differentiate their Discord alternative, they should offer most of Discord's functionality and add the ability to be in multiple voice chats at once (maybe with permissions, a channel hierarchy, and different push-to-talk binds per channel). It's a missing feature when running huge operations in games, and using the Canary client is not always enough.
I’ve been self-hosting Element Call and use it to call my girlfriend (I also used it with another friend a few nights ago). I’ve had a few cases where the call doesn’t seem to connect at first, but trying again works. That’s really the only issue I can think of since setting up a TURN server (before that, calls would sometimes fail completely, but that’s not Element Call’s fault).
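For anyone hitting the same pre-TURN connection failures: this is roughly what a minimal coturn `turnserver.conf` looks like. The realm, secret, and cert paths below are placeholders, not my actual setup, so adapt them:

```
# /etc/turnserver.conf - minimal coturn sketch for Matrix VoIP
# (turn.example.com, the secret, and the cert paths are placeholders)
use-auth-secret
static-auth-secret=REPLACE_WITH_LONG_RANDOM_SECRET
realm=turn.example.com

# plain TURN plus TURN-over-TLS
listening-port=3478
tls-listening-port=5349
cert=/etc/ssl/turn.example.com/fullchain.pem
pkey=/etc/ssl/turn.example.com/privkey.pem

# don't let clients relay into private networks
no-multicast-peers
denied-peer-ip=10.0.0.0-10.255.255.255
denied-peer-ip=172.16.0.0-172.31.255.255
denied-peer-ip=192.168.0.0-192.168.255.255
```

The homeserver then needs to be told about it; on Synapse that's the `turn_uris` and `turn_shared_secret` settings in `homeserver.yaml` (the shared secret must match `static-auth-secret`).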
Thanks for sharing. I think the design of MatrixRTC (especially the scaling via hierarchical SFUs) looks promising. It's nice to see someone actually using it at this early stage, even if only for 1:1 calls.
I use MiroTalk for it. Within Element you can set up widgets (basically PWAs) and so you can call via Element’s built in Jitsi widget (or a more reliable dedicated Jitsi link) and then use MiroTalk to share screens. It is a LOT better, especially for streaming video.
In terms of ease of use, it’s like three clicks. Technically more than Discord, but it’s p2p streaming so it’s far nicer quality.
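If you haven't used widgets before: you can pin any URL as a widget straight from the room's message box with Element's `/addwidget` command. The room URL here is just an example, not a real room:

```
/addwidget https://p2p.mirotalk.com/join/our-shared-room
```

After that, everyone in the Matrix room can open the MiroTalk screen-share from the widget without leaving Element.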
My question would be: what about the myriad other projects you tasked Opus 4.6 with building that never got to a point where you could kinda-sorta make a post about them?
This kind of headline makes me think of p-hacking.
> Like blameless postmortems taken to a comical extreme where one person is always doing something careless that causes problems and we all have to brainstorm a way to pretend that the system failed, not the person who continues to cause us problems.
Well, I'd argue the system failed in that the bad person is not removed. The root is then bad hiring decision and bad management of problematic people. You can do a blameless postmortem guiding a change in policy which ends in some people getting fired.
> You can do a blameless postmortem guiding a change in policy which ends in some people getting fired.
In theory maybe, but in my experience the blameless postmortem culture gets taken to such an extreme that even when one person is consistently, undeniably to blame for causing problems we have to spend years pretending it’s a system failure instead. I think engineers like the idea that you can engineer enough rules, policies, and guardrails that it’s impossible to do anything but the right thing.
This can create a feedback loop where the bad players realize they can get away with a lot because if they get caught they just blame the system for letting them do the bad thing. It can also foster an environment where it’s expected that anything that is allowed to happen is implicitly okay to do, because the blameless postmortem culture assigns blame on the faceless system rather than the individuals doing the actions.
Agreed. The concept of a 'blameless' postmortem came from airplane crash investigation - but if one pilot crashed 6 commercial jets, we wouldn't say "must be a problem with the design of the controls".
So what do they actually say in aviation? There was a pilot suicide that took down the whole plane, Germanwings Flight 9525. I find it more important that the aviation industry made regulatory changes than the fact that (probably) "they blamed the pilot".
I think there are too many people who actually like "blaming someone else", and that causes issues well beyond software development.
Blameless postmortems are for processes where everyone is acting in good faith and a mistake was made and everyone wants to fix it.
If one party decides that they don’t want to address a material error, then they’re not acting in good faith. At that point we don’t use blameless procedures anymore, we use accountability procedures, and we usually exclude the recalcitrant people from the remediation process, because they’ve shown bad faith.
> Well, I'd argue the system failed in that the bad person is not removed.
This is just a proxy for "the person is bad" then. There's no need to invoke a system. Who can possibly trace back all the things that could or couldn't have been spotted at interview stage or in probation? Who cares, when the end result is "fire the person" or, probably, "promote the person".
This "random output machine" is already in wide use in medicine, so why exactly not? Should I trust a young doctor fresh out of university more by default, or should I take advice from both of them with a grain of salt? I've had failures and successes with both, but lately I've found Gemini to be extremely good at what it does.
The "well we already have a bunch of people doing this and it would be difficult to introduce guardrails that are consistently effective so fuck it we ball" is one of the most toxic belief systems in the tech industry.
> This "random output machine" is already in large use in medicine
By doctors. It's like handling dangerous chemicals. If you know what you're doing you get some good results, otherwise you just melt your face off.
> Should I trust the young doctor fresh out of the Uni
You trust the process that got the doctor there: the knowledge they absorbed, the checks they passed. The doctor doesn't operate in a vacuum; there's a structure in place to validate critical decisions. In any case, you won't blindly trust one young doctor - if it's important, you get a second opinion from another qualified doctor.
In the fields I know a lot about, LLMs fail spectacularly so, so often. Having that experience and knowing how badly they fail, I have no reason to trust them in any critical field where I cannot personally verify the output. A medical AI could enhance a trained doctor, or give false confidence to an inexperienced one, but on its own it's just dangerous.
There's a difference between a doctor (an expert in their field) using AI (specialising in medicine) and you (a lay person) using it to diagnose and treat yourself. In the US, it takes at least 10 years of studying (and interning) to become a doctor.
Even so, it's rather common for doctors not to be able to diagnose correctly. It's a guessing game for them too. I don't know much about the US, but it's a real problem in large parts of the world. As the comment stated, I would take anything a doctor says with a pinch of salt, particularly when the problem is not obvious.
This is really not that far off from the argument that "well, people make mistakes a lot, too, so really, LLMs are just like people, and they're probably conscious too!"
Yes, doctors make mistakes. Yes, some doctors make a lot of mistakes. Yes, some patients get misdiagnosed a bunch (because they have something unusual, or because they are a member of a group—like women, people of color, overweight people, or some combination—that American doctors have a tendency to disbelieve).
None of that means that it's a good idea to replace those human doctors with LLMs that can make up brand-new diseases that don't exist occasionally.
It takes 10 years of hard work to become a proficient engineer too, yet that doesn't stop us from missing things. That argument cannot hold. AI is already widespread in medical treatment.
An engineer is not a doctor, nor a doctor an engineer. Yes, AI is being used in medicine - as a tool for the professional - and that's the right use for it. Helping a radiologist read an X-ray, MRI, or CT scan, helping a doctor create an effective treatment plan, or warning a pharmacologist about dangerous drug interactions when different medications are prescribed are all areas where AI can make a professional's job easier and better, and also help create better AI.
Nobody can (or should) stop you from learning and educating yourself. It doesn't mean, however, that just because you can use Google or AI, you can become a doctor.
Educating a user about their illness and treatment is a legitimate use case for AI, but acting on its advice to treat yourself or self-medicate would be plain stupidity. (Thankfully, self-medicating isn't that easy, because most medications require a prescription. However, so-called "alternative" medicines are often a grey area, even with regulations - in India, for example.)
No, I'm not asking you to spend $150; I'm providing the evidence you're looking for. The Mayo Clinic, probably one of the most prominent private clinics in the US, is using transformers in their workflow, and there are many other similar links you could find online, but you choose to remain ignorant. Congratulations.
The existence of a course on this topic is NOT evidence of "large use". The contents of the course might contain such evidence, or they might contain evidence that LLM use is practically non-existent at this point (the flowery language used to describe the course is used for almost any course tangentially related to new technology in the business context, so that's not evidence either).
But your focus on the existence of this course as your only piece of evidence is evidence enough for me.
Focus? You asked me for evidence. I provided it, and it carries real weight. If that's the focus you're looking for, then sure. Take it as you will; I'm not here to convince anyone of anything. Look at the recent past to see how Transformers solved long-standing problems nobody believed were tractable up to that point.
An LLM is just a tool. How the tool is used is also an important question. People vibe code these days, sometimes without proper review, but do you want them to vibe code a nuclear reactor controller without reviewing the code?
In principle we could just let anyone use an LLM for medical advice, provided they know LLMs are not reliable. But LLMs are engineered to sound reliable, and people often just believe the output. And there have been cases showing this can have severe consequences...
> I would kill for a true ambi five-button mouse to replace my old Microsoft Intellimouse, but I've run into the same problem, they just don't seem to exist anymore.
I was going to say the SteelSeries Sensei, but it looks like those have been discontinued.
> I've literally never had the thought of "how do I influence other people." Why is that considered a valuable skill?
If you're a software developer, you must have thought "the current priorities aren't right; we should do X for the users or Y to get better quality" and tried to influence your management to reprioritize. Maybe by starting a campaign with your users so the demands come from multiple teams and not just you, or by measuring quality indicators and showing how what you want to implement would improve them, etc.
That's why you want to start getting coffee with people, maybe go outside with the smokers. It can take months of "work" to get people to propose the idea you want done.
But this kind of influencing won't help your career.
> people simply MUST be doing self-guided experimentation
And self-guided exploration is a skill in itself that you have to learn. You can experiment for years and get nothing out of it because you don't even measure anything. You can find a local maximum and, not knowing the concept, never try something radically different.
I just don't want a white (or any other color) van. Let's say I have some idea in my head for an S10 to make it interesting. There's no way to make a Trafic, a Partner, or any similar van interesting. It'll just end up looking like a DHL delivery vehicle anyway.
I want it with all the pros and cons, just to try it.