
> the end result is seriously faster

Do you have a ballpark value of how much faster Rust is? Also I wonder if OxCaml will be roughly as fast with less effort.


Just the straight/naive rewrite was ~3 times faster for my benchmark (which was running the program on the real dataset) and then I went down the rabbit hole and optimized it further and ended up ~5 times faster. Then slapped Rayon on top and got another ~2-3x depending on the number of cores and disk speed (the problem wasn't embarrassingly parallel, but still got a nice speedup).

Of course, all of this was mostly unneeded, but I just wanted to find out what I was getting myself into, and I was very happy with the result. My move to Rust was mostly not because of speed, but I still needed a fast language (and OCaml qualifies). This was also before the days of multicore OCaml, so nowadays it would matter even less.


> straight/naive rewrite was ~3 times faster

How much of that do you think comes from reduced allocations/indirections? Now I really want to try out OxCaml and see if I can approximate this speedup by picking up low hanging fruits.


I would imagine most of it, because the program in question mostly does parsing and ETL.


> Sum types: For example, Kotlin and Java (and de facto C#) use a construct associated with inheritance relations called sealing.

This has the benefit of giving you the ability to refer to a case as its own type.

> the expression of sums verbose and, in my view, harder to reason about.

You declare the sum type once, and use it many times. Slightly more verbose sum type declaration is worth it when it makes using the cases cleaner.
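For concreteness, here's a minimal Kotlin sketch of that benefit (the shape and function names are hypothetical):

```kotlin
// Each case of the sealed hierarchy is a full-fledged type.
sealed interface Shape
data class Circle(val radius: Double) : Shape
data class Rectangle(val width: Double, val height: Double) : Shape

// Because Circle is its own type, a function can accept just that one case,
// and the compiler rejects any other Shape at the call site.
fun circumference(c: Circle): Double = 2 * Math.PI * c.radius
```

With a plain ML-style sum type you'd have to take the whole `shape` and be partial (or fail) on the non-circle cases.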


> This has the benefit of giving you the ability to refer to a case as its own type.

A case of a sum-type is an expression (of the kind called a type constructor); of course it has a type.

  datatype shape =
      Circle of real
    | Rectangle of real * real
    | Point

   Circle : real -> shape
   Rectangle : real * real -> shape
   Point : shape
A case itself isn't a type, though it has a type. Thanks to pattern matching, you're already unwrapping the parameter to the type-constructor when handling the case of a sum-type. It's all about declaration locality. (real * real) doesn't depend on the existence of shape.

The moment you start ripping cases as distinct types out of the sum-type, you create the ability to side-step exhaustiveness and sum-types become useless in making invalid program states unrepresentable. They're also no longer sum-types. If you have a sum-type of nominally distinct types, the sum-type is contingent on the existence of those types. In a class hierarchy, this relationship is bizarrely reversed and there are knock-on effects to that.

> You declare the sum type once, and use it many times.

And you typically write many sum-types. They're disposable. And more to the point, you also have to read the code you write. The cost of verbosity here is underestimated.

> Slightly more verbose sum type declaration is worth it when it makes using the cases cleaner.

C#/Java don't actually have sum-types. It's an incompatible formalism with their type systems.

Anyways, let's look at these examples:

C#:

  public abstract record Shape;
  public sealed record Circle(double Radius) : Shape;
  public sealed record Rectangle(double Width, double Height) : Shape;
  public sealed record Point() : Shape;
  
  double Area(Shape shape) => shape switch
  {
      Circle c => Math.PI * c.Radius * c.Radius,
      Rectangle r => r.Width * r.Height,
      Point => 0.0,
      _ => throw new ArgumentException("Unknown shape", nameof(shape))
  };
ML:

  datatype shape =
      Circle of real
    | Rectangle of real * real
    | Point
  
  val result =
    case shape of
        Circle r => Math.pi * r * r
      | Rectangle (w, h) => w * h
      | Point => 0.0
They're pretty much the same outside of C#'s OOP quirkiness getting in its own way.


> The moment you start ripping cases as distinct types out of the sum-type, you create the ability to side-step exhaustiveness and sum-types become useless in making invalid program states unrepresentable.

Quite the opposite, that gives me the ability to explicitly express what kinds of values I might return. With your shape example, you cannot express in the type system "this function won't return a point". But with sum type as sealed inheritance hierarchy I can.

> C#/Java don't actually have sum-types.

> They're pretty much the same

Not sure about C#, but in Java if you write `sealed` correctly you won't need the catch-all throw.

If they're not actual sum types but are pretty much the same, what good does the "actually" do?


> Not sure about C#, but in Java if you write `sealed` correctly you won't need the catch-all throw.

Will the compiler check that you have handled all the cases still? (Genuinely unsure — not a Java programmer)


Yes

https://openjdk.org/jeps/409#Sealed-classes-and-pattern-matc...

> with pattern matching for switch (JEP 406), the compiler can confirm that every permitted subclass of Shape is covered, so no default clause or other total pattern is needed. The compiler will, moreover, issue an error message if any of the three cases is missing
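Kotlin's `when` over a sealed hierarchy behaves the same way. A sketch (the types and names here are illustrative, not from the thread):

```kotlin
sealed interface Shape
data class Circle(val radius: Double) : Shape
data class Rectangle(val width: Double, val height: Double) : Shape
object Point : Shape

// No `else` branch: the compiler verifies all permitted subtypes are covered
// and reports an error if one of the three cases is missing.
fun area(shape: Shape): Double = when (shape) {
    is Circle -> Math.PI * shape.radius * shape.radius
    is Rectangle -> shape.width * shape.height
    Point -> 0.0
}
```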


Yes, that's the whole purpose of marking an interface/class `sealed`.


> With your shape example, you cannot express in the type system "this function won't return a point".

Sure you can, that's just subtyping. If it returns a value that's not a point, the domain has changed from the shape type and you should probably indicate that.

  structure Shape = struct
    datatype shape =
        Circle of real
      | Rectangle of real * real
      | Point
  end

  structure Bound = struct
    datatype shape =
        Circle of real
      | Rectangle of real * real
  end
This is doing things quick and dirty. For this trivial example it's fine, and I think a good example of why making sum-types low friction is a good idea. It completely changes how you solve problems when they're fire and forget like this.

That's not to say it's the only way to solve this problem, though. And for heavy-duty problems, you typically write something like this using higher-kinded polymorphism:

  signature SHAPE_TYPE = sig
    datatype shape =
        Circle of real
      | Rectangle of real * real
      | Point

    val Circle : real -> shape
    val Rectangle : real * real -> shape
    val Point : shape
  end

  functor FullShape () : SHAPE_TYPE = struct
    datatype shape =
        Circle of real
      | Rectangle of real * real
      | Point

    val Circle = Circle
    val Rectangle = Rectangle
    val Point = Point
  end

  functor RemovePoint (S : SHAPE_TYPE) :> sig
    type shape
    val Circle : real -> shape
    val Rectangle : real * real -> shape
  end = struct
    type shape = S.shape
    val Circle = S.Circle
    val Rectangle = S.Rectangle
  end


  structure Shape = FullShape()
  structure Bound = RemovePoint(Shape)

This is extremely overkill for the example, but it also demonstrates a power you're not getting out of C# or Java without usage of reflection. This is closer to the system of inheritance, but it's a bit better designed. The added benefit here over reflection is that the same principle of "invalid program states are unrepresentable" applies here as well, because it's the exact same system being used. You'll also note that even though it's a fair bit closer conceptually to classes, the sum-type is still distinct.

Anyways, in both cases, this is now just:

  DoesNotReturnPoint : Shape.shape -> Bound.shape
Haskell has actual GADTs and proper higher kinded polymorphism, and a few other features where this all looks very different and much terser. Newer languages bake subtyping into the grammar.

> If they're not actual sum types but are pretty much the same, what good does the "actually" do?

Conflation of two different things here. The examples given are syntactically similar, and they're both treating the constituent part of the grammar as a tagged union. The point was that using the cases isn't any cleaner.

However in the broader comparison between class hierarchies and sum-types? They're not similar at all. Classes can do some of the things that sum-types can do, but they're fundamentally different and encourage a completely different approach to problem-solving, conceptualization and project structure... in all but the most rudimentary examples. As I said, my 2nd example here is far closer to a class-hierarchy system than sum-types, though it's still very different. And again, underlining that because of the properties of sum-types, thanks to their specific formalization, they're capable of things class hierarchies aren't. Namely, enforcing valid program-states at a type-level. Somebody more familiar with object-oriented formalizations may be a better person to ask than me on why that is the case.

It's a pretty complicated space to talk about, because these type systems deviate on a very basic and fundamental level. Shit just doesn't translate well, and it's easy to find false friends. Like how the Japanese word for "name" sounds like the English word, despite not being a loan word.


You wrote a lot of words to say very little.

Anyway, to translate your example:

    sealed interface Shape permits Point, Bound {}
    final class Point implements Shape {}
    sealed interface Bound extends Shape permits Circle, Rectangle {}
    record Circle(double radius) implements Bound {}
    record Rectangle(double width, double height) implements Bound {}
A `Rectangle` is both a `Bound` (weird name choice but whatever), and a `Shape`. Thanks to subtyping, no contortion needed. No need to use 7 more lines to create a separate, unrelated type.
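To make the "won't return a point" guarantee concrete, here is a hedged Kotlin sketch of the same hierarchy (`toBound` and its mapping for `Point` are made up for illustration):

```kotlin
sealed interface Shape
object Point : Shape
sealed interface Bound : Shape
data class Circle(val radius: Double) : Bound
data class Rectangle(val width: Double, val height: Double) : Bound

// The return type alone expresses "this never yields a Point";
// the `when` stays exhaustive, with no catch-all throw.
fun toBound(s: Shape): Bound = when (s) {
    Point -> Circle(0.0) // hypothetical choice of how to map a Point
    is Bound -> s
}
```

Because `Circle` and `Rectangle` are subtypes of both `Bound` and `Shape`, no conversion functions between two separate sum types are needed.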

> the Japanese word for "name" sounds like the English word, despite not being a loan word.

Great analogy, except for the fact that someone from the Java team explicitly said they're drawing inspirations from ML.

https://news.ycombinator.com/item?id=24203363


> You wrote a lot of words to say very little.

Substantiate this.

> weird name choice but whatever

I don't think this kind of snarky potshot is in line with the commentary guidelines. Perhaps you could benefit from a refresher?

https://news.ycombinator.com/newsguidelines.html#comments

> Thanks to subtyping, no contortion needed

I see the same degree of contortion, actually. Far more noisy, at that.

> No need to use 7 more lines to create a separate, unrelated type.

You're still creating a type, because you understand that a sum-type with a different set of cases is fundamentally a different type. Just like a class with a different set of inheritance is a different type. And while it's very cute to compress it all into a single line, it's really not compelling in the context of readability and "write once, use many". Which is the point you were making, although it was on an entirely different part of the grammar.

> Great analogy, except for the fact that someone from the Java team explicitly said they're drawing inspirations from ML.

ML didn't invent ADTs, and I think you know it's more than disingenuous to imply the quotation means that the type-system in Java, which hasn't undergone any fundamental changes in the history of the language (nor could it without drastically changing the grammar of the language and breaking the close relationship to the JVM), was lifted from ML.


> Substantiate this.

You never gave an example how sum types in Java/Kotlin cannot do what "real" sum types can.

>> weird name choice but whatever

> snarky potshot

Sorry that you read snark. What I meant was "I find naming this 'Bound' weird. But since I am translating your example, I'll reuse it".

> You're still creating an unrelated type

How can a type participating in the inheritance hierarchy be "unrelated"?

> I see the same degree of contortion, actually. Far more noisy, at that.

At this point I can only hope you're a Haskeller and do not represent an average OCaml programmer.


PS rereading this I think "hope you're a Haskeller" might be read as an insult. That's not my intention, here's why I mention Haskell.

1. It's THE other language with a type system based on HM.

2. Variant constructors as functions. OCaml does not do that, Haskell does (slightly more elegantly). This hints that sunnydiskincali is more familiar with Haskell than OCaml.

3. I was confused by `type shape = S.shape`. How does `RemovePoint(Shape).shape` have the `Point` case removed then? I tried that on a REPL ^1 and it didn't even compile. Again, syntax errors hinting at Haskell experience.

Well now I've written so much I may as well do a point-by-point refutation: ^2

> you create the ability to side-step exhaustiveness

Big claim, and it sounds scary to someone not familiar with sum types. But Java/Kotlin both enforce exhaustiveness. You could have provided an example in your second response; instead you dumped a bunch of code that does not compile.

> Sure you can, that's just subtyping.

Then you followed up with an example that is not subtyping, but an unrelated type of a new set of new values.

> This is doing things quick and dirty. For this trivial example it's fine

This is not fine. I undersold the verbosity of your "quick and dirty" solution saying "7 lines". To actually work with those two types, the pair of conversion functions `Shape.shape -> Bound.shape option` and `Bound.shape -> Shape.shape` is needed.

> They're not similar at all.

~100 words in the paragraph, gestures to formalization, yet never explained how sum types implemented as sealed inheritance cannot be "enforcing valid program-states at a type-level". Thus my comment "a lot of words to say very little".

> You're still creating a type

I see you removed "unrelated" in an edit. The statement is now accurate but pointless. Of course I need to create a type, how else can I use the type system to say "this function won't return a point"?

> disingenuous to imply the quotation means that the type-system in Java ... was lifted from ML.

It would be more than disingenuous, colossally stupid even, if I did imply that. The wrongness would be on the level of claiming "English and Japanese are in the same language family".

Your cognate/false friend analogy is much smaller in scope, just like Java taking sum types (implementing them as sealed inheritance) from ML.

1: https://ocsigen.org/js_of_ocaml/toplevel/

2: https://xkcd.com/386/


I'm very embarrassed to say this. Those code examples weren't non-compiling OCaml, but valid SML. Once I remembered the existence of the language (in my defence it was never mentioned in the thread), I managed to compile the code, and confirm my suspicion:

`val point: Bound.shape = Shape.Point` type-checks, because `type shape = S.shape`. To drive the point home, so does

    val DoesNotReturnPoint : Shape.shape -> Bound.shape = fn x => x
So the module example does not show "this function won't return a point" as one would have hoped.


In the specific case of OCaml, this is also possible using indexing and GADTs or polymorphic variants. But generally, referencing a case as its own type serves different purposes. From my point of view, distinguishing between sum branches often tends to result in code that is difficult to reason about and difficult to generalise, due to concerns about variance and loss of type equality.


Unless you reach an unsound part of the type system I don't see how. Could you provide an example?


- You can use GADTs (https://ocaml.org/manual/5.2/gadts-tutorial.html) and indexes to give a concrete type to every constructors:

  ```ocaml
  type _ treated_as = 
   | Int : int -> int treated_as
   | Float : float -> float treated_as

  let f (Int x) = x + 1 (* val f : int treated_as -> int *)
  ```
- You can use the structural nature of polymorphic variants (https://ocaml.org/manual/5.1/polyvariant.html)

  ```ocaml
  let f = function 
  | `Foo x -> string_of_int (x + 1) 
  | `Bar x -> x ^ "Hello"
  (* val f : [< `Foo of int | `Bar of string] -> string *)

  let g = function
  | `Foo _ -> ()
  | _ -> () 
  (* val g : [> `Foo of 'a ] -> unit *)
  ```
(Notice the difference between `>` and `<` in the signature?)

And since OCaml also has an object model, you can also encode sums and sealing using modules (and private type abbreviations).


Oh, if you use those features to express what "sum type as subtyping" can, it sure gets confusing. But it's not the things I want to express that are hard to reason about; the confusing part is the additions to the HM type system.

A meta point: it seems to me that a lot of commenters in my thread don't know that vanilla HM cannot express subtypes. This allows the type system to "run backwards", and you get full type inference without any type annotations. One can call it a good tradeoff, but it IS a tradeoff.


Yes, and my point was: when you want what you present in the first comment, quoting my post, you have tools for that, available in OCaml. But there are cases when you do not want to treat each branch of your constructors "as a type", when the encoding of visitors is just rough. This is why I think it is nice to have sum types, to complement product types. So I am not sure why we are arguing :)


> So i am not sure why we are arguing :)

I think we agree on a lot of points. The rest is mostly preferences. Some other comments in my thread though...


Ok! (BTW, thanks for the interaction!)


I'm not sure why people are debating the merits of sum types versus sealed types in response to this. I prefer functional languages myself, but you are entirely correct that sealed types can fully model sum types and that the type level discrimination you get for free via subtyping makes them slightly easier to define and work with than sum types reliant on polymorphism.

Operationally these systems and philosophies are quite different, but mathematically we are all working in more or less an equivalent category, and all the type system shenanigans you have in FP are possible in OOP modulo explicit limits placed on the language, and vice versa.


> I'm not sure why

Me neither.

> you are entirely correct that sealed types can fully model sum types

I want to be wrong, in that case I learn something new.


> Slightly more verbose sum type declaration is worth it *when it makes using the cases cleaner.*

Correct. This is not the case when you talk about Java/Kotlin. Just ugliness and the typical boilerplate-heavy approach of JVM languages.


> Just ugliness and typical boilerplate heavy approach of JVM languages.

I have provided a case of how using inheritance to express sum types can help at the use site. You attacked without substantiating your claim.


Kotlin's/Java's implementation is just a poor man's version of a very restricted set of real sum types. I have no idea what

> This has the benefit of giving you the ability to refer to a case as its own type.

means.


> I have no idea

I can tell.

Thankfully the OCaml textbook has this explicitly called out.

https://dev.realworldocaml.org/variants.html#combining-recor...

> The main downside is the obvious one, which is that an inline record can’t be treated as its own free-standing object. And, as you can see below, OCaml will reject code that tries to do so.


That's for embedded records. You can have the same thing as Kotlin but with better syntax.


If you don't do inline records you either

- create a separate record type, which is no less verbose than Java's approach

- use positional destructuring, which is bug prone for business logic.

Also it's funny that you think OCaml records have "better syntax". It's a weak part of the language, creating ambiguity. People work around this quirk by wrapping every record type in its own module.

https://dev.realworldocaml.org/records.html#reusing-field-na...


You mistyped "backwards compatible change" going back to close to 3 decades.


> If you want speed

> If you want to be efficient

Funny that you assume the best position of the trade off continuum isn't somewhere in the middle for most people. Besides, for developer efficiency, I prefer a language where I don't have to constantly worry if the type system is defeated at runtime.


The best position in the middle is the combination of Python and C. I don't know why people are so aghast about writing small C programs, compiling them, and launching them with Python through an os call.

>I prefer a language where I don't have to constantly worry if the type system is defeated at runtime.

If you are doing this with Python, you are doing something very wrong, even without mypy. As for NodeJS, just use Typescript.


> The best position in the middle is the combination of Python and C.

This is an opinion of which many would disagree, for various legitimate reasons, yet appears to be the polyglot approach you prefer. So let's briefly explore it.

> I don't know why people are so aghast about writing small C programs, compiling them, and launching them with Python through an os call.

There are significant limitations to using fork[0]/exec[1] as a general purpose component integration strategy, not the least of which is the inability to have fine-grained bidirectional interactions.

A better "Python and C" integration option is to employ SWIG[2] to incorporate C/C++ libraries directly into the Python execution environment.

0 - https://man.freebsd.org/cgi/man.cgi?query=fork&apropos=0&sek...

1 - https://man.freebsd.org/cgi/man.cgi?query=execve&sektion=2&a...

2 - https://swig.org/


You don't fork/exec every time. You fork/exec once, and then use a standard C template for a select or epoll loop over a unix socket, and transport all the data that you need processed fast over that, with bidirectional comms.

Even more so, you can oftentimes prototype in Python with rapid dev, and then when you want performance, you can translate it to pretty much whatever, including C, using LLMs that do a pretty good job. With coding agents, you can set them up to basically run the code side by side against a bunch of inputs and automatically fix stuff. We pretty much did this at our job to translate an internal API backend to a web server written purely in C, fully memory safe and without any memory bugs.


As a Kotlin enjoyer, I find these comments counterproductive. Maybe they like the lack of extension functions?


Kotlin is fatter, the compiler is slower, and code completion is slow as hell on large projects. But other than for building small applications, there's really no reason not to use Kotlin, except that you need to actually learn the language, or else you're going to end up with a very, very slow codebase where opening a file and waiting for syntax highlighting takes 2-3 seconds and autocomplete is painfully slow.


"fatter, compiler is slower, code completion is slow as hell" - if that's all you want out of your programming language, then Java is probably a good choice for you.

For others that value the things that Kotlin brings over Java (even modern Java), and for the ways in which it delivers a simpler experience than Scala - I think it's a pragmatic and sensible decision.


I do like the lack of extension functions. I find them confusing, especially when you can use them on things that are null.


I wonder if that confusion is due to the fact that you haven't yet wrapped your head around extension functions being "just" syntactic sugar for static functions. The implicit `this` becomes the first parameter of the static function, and function parameters can be null. Now you might ask, "why not use static (/first-class) functions then?" Because those feel less idiomatic than extension functions or methods defined on the object (hierarchy) itself. But understanding why the extension type can be nullable is not the same as using it on nullable types. I restrict my extension functions to non-nullable types most of the time as well. The best exception to this preference (just to see where it makes sense) is the built-in function [toString](https://kotlinlang.org/api/core/kotlin-stdlib/kotlin/to-stri...), since you want it to return "null" if you invoke it on null.
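A minimal Kotlin sketch of that desugaring (the function names here are made up):

```kotlin
// An extension with a nullable receiver: callable on null without a check,
// because inside the body `this` is just a nullable value.
fun String?.orDefault(default: String): String =
    if (this == null) default else this

// Roughly what the compiler generates: a static function whose
// first parameter is the receiver.
fun orDefaultDesugared(receiver: String?, default: String): String =
    if (receiver == null) default else receiver
```

So `maybeNull.orDefault("fallback")` compiles even when `maybeNull` has a nullable type, which is exactly why the receiver may legally be null.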


I have wrapped my head around it. I think it's confusing to the reader and creates awkward semantics.


Yeah, extension functions are one of those features that went from 'oh, this is nice' to "this is so overused it's counterproductive".

It makes reading a lot of Kotlin source quite terrible.

Lately they've been shoveling a lot of similar magical "code comes from somewhere" features into the language, slowly giving it a C++-style cluttered feel.


What I mean by that is this:

    val a: SomeType? = null
    // I’m forced to null check here
    if (a != null) {
        a.someMethodOnIt()
    }
    // But I don’t have to null check here
    a.someExtensionFn()
It’s weird.


> .NET is now cross platform, but only as long as it doesn't hurt VS sales, with GUI workloads, profilers, still being mostly Windows only, and partially supported on VSCode, which also has the same VS license.

On HN I keep hearing that associating .NET with Windows is outdated perception.

Writing JVM languages I feel that the developer experience is pretty much the same on any OS. It seems this cannot be said for .NET?


If you're writing a server or a web app then it's good and runs well.

Visual Studio is still not ported to Linux or Mac, you need to use Rider or VSCode. If you use JetBrains for Java, using Rider will feel good no matter where you are.

The GUI library situation is a tough one. In many ways it's far more advanced than other languages', but their newest attempt is not as good as the older Windows-only API. But what other language is graded on having a great native GUI library?

I'm not calling MS cool but at the same time I think the goalposts are different.


I do not understand the hang-up on Visual Studio.

We don't do the same for Java, Rust, or C… there are good IDEs for each of them and none are made by the maintainers of the language.


Java IDEs have historically been made by maintainers of the language.

Netbeans was a product acquired by Sun, Sun Forte was its "Professional" variant in Solaris, and Oracle still takes care of it in the context of Solaris and Oracle Linux.

Eclipse was a rewrite of the Visual Age products, originally written in Smalltalk, by IBM, and IBM keeps being a Java vendor with their own implementations.


I do get the sentiment to some degree. Part of it is that Microsoft does have a conflict of interest as an OS vendor. They do need to show that they aren't/won't be abusing that. That does put them in a position where they're asked to go above and beyond as a form of litmus test.


Re: GUI library situation, are you implying that they finally came up with something that's cross platform? What is it?


They tried, by forking Xamarin Forms into MAUI, and even then they ignored Linux. It's really rough though, to the point many projects use it as just a glorified webview for Blazor. I expect it to eventually go into a silent maintenance mode along with WinUI 3.

Avalonia is the go-to library for cross-platform UI in .NET right now. But Microsoft's own apps have been switching to web stacks, in a clear case of "Do as I say, not as I do."


There is actually a much better but less well-known open source library in .NET: Avalonia. Look up their gallery of apps. Avalonia is the cross-platform version of the Windows Presentation Foundation (WPF) libs. It is quite good for desktop apps, and many commercial pieces of software use it.


MAUI apparently has Windows, Mac and Mobile support but no distro Linux support (unless Wine counts). You could use the web stack to be truly cross platform.


The server deploy experience for .NET is pretty much the same on Windows or Linux. The developer tooling experience has more options on Windows.


It can. DX is pretty much the same for backend and CLI stuff using VS Code on Mac, Linux and Windows. I'm working daily on C# backend and CLI stuff on a Mac (those are the dev machines at my employer). DX is on par with Go and Rust (at least dotnet CLI, LSP, Debugger, I can't speak for the profiler as I've never used it). I like the Rust tooling most, but dotnet CLI is not far behind.

Language and std lib wise, C# sits in the sweet spot.


Mh, I'm not the most experienced guy with .NET.

We have a few .NET applications running on the infrastructure on Linux hosts and it's just like every other thing.

But in some contexts, e.g. PowerBI, it pulls in a dependency and BOOM, it's Windows-only to the point that not even Wine or Proton can help you. For something that should be, mind you, a dumb SQL proxy, like the PowerBI Embedded Gateway.


I think the success of Proton and Wine in games clouds the vision of the Linux community. The contributors did great work on them. However, the gaming API of Windows is a very limited slice of the vast API.

Games are quite standalone programs; they don't depend on deeply integrated Win32 stuff. They don't even use the standard UI stuff from Win32. With Vulkan, porting DirectX became very viable, and that was the grunt work. There are no DCOM servers or OLE stuff in games, which is where the Windows API actually becomes huge and sometimes nastier. Business apps, however, depend deeply on those.


Pretty much no, it can't be said for .Net.

It currently supports Linux as a running target for servers. It supports both running desktop software and development very badly.


It supports Linux as a running target for console apps, which can be servers, background apps, systemd apps, etc. So everything except UI apps.

The development experience with Rider is also great on Linux. I think you need to be more specific with the complaints because I have many beefs with Microsoft's approach to many things, but I could not pick up on what you meant.


You can use Avalonia to develop cross-platform apps with .NET.

GUI stuff from Windows depends deeply on Win32 and how Windows's core APIs work. So expecting Microsoft to port stuff like .Net Windows Forms is meaningless. They are open source though. Maybe with some completion effort Wine can run them.


I'm an Avalonia UI user myself, but didn't want to mention it since Microsoft themselves have done nothing to contribute to its existence. The UI rendering for Avalonia on Linux is not a Microsoft technology so I think that praise should go to the Avalonia team and whoever is developing Skia (Google?).


Can run SDL on linux and macos just fine, rendering visuals to the screen in X or Wayland.


> The watch is simply missing the two 5.1k resistors connecting the CC1 and CC2 pins of the USB-C connector to ground that are required to indicate to whatever is plugged in that it wants 5v power.

This is so annoying. Back when USB-C was less prevalent, I bought one pair of wireless earbuds over another for the same reason as the title - because it used USB-C. But then I couldn't charge it from my MacBook unless I added a USB-C to USB-A adapter.


This problem seems prevalent on cheaper devices. When I buy a device and discover it has this problem I always return it. I've seen it on the Hypervolt Go 2 (which I returned and replaced with a Theragun Mini) and on the Hitachi Magic Wand Micro (which I replaced with a Dame Dip).

Like the post mentions, I think this happens because the devices are missing two resistors that are needed to indicate, when connected via a USB-C to USB-C cable to a charging brick, that the device wants 5V power. Resistors are cheap and I think the only reason they get dropped is carelessness.

The whole point of USB-C is that you can charge any device with any power supply.


> This problem seems prevalent on cheaper devices.

I’ve seen it on plenty of higher-end devices as well; and even worse.

The worst offender I’ve encountered is the ThermoWorks Billows. ThermoWorks is a well-established brand that makes high-end thermometers and is considered one of the best on the market. So I was quite surprised to discover how their ‘Billows’ product is powered.

The device itself needs 12V and has a USB-C port for power. You’d think it would use USB-PD to negotiate its power needs so you could just use any old USB-C adapter. Not the case. It comes with a USB-A to USB-C cable and requires a special adapter with a USB-A port that puts 12V on the pins that normally supply 5V.

I have no idea how they came up with this abomination. Why even use USB-A connectors if it’s not going to work with a standard USB-A adapter, and why supply an adapter that’s basically going to kill most USB-A devices you plug into it? If you have a custom adapter anyway, why not just use a simple barrel connector? Why put a USB-C port on the device if it can’t use USB-PD?

I can imagine some Chinese ali-express product using such an abomination to save a few cents on components, but why would a well-respected brand like ThermoWorks ship such a thing? It boggles the mind.


I've seen even worse. I was upgrading an old device that had a 12v barrel connector, and was happy to see the new one used USB-C instead.

It came with a power brick that I happened to look at, and I noticed that the output voltage was listed simply as 12V (instead of all possible outputs like USB-C bricks normally list). I hooked it up to a USB-PD breakout board I had and tested it. Sure enough, it output 12V regardless of what was asked for.

Luckily, the device itself actually did USB-PD, so I was able to throw away that monstrosity before it fried anything. Annoyingly, the device only supported 12V, which is hit or miss on charger support, but at least a mismatch there isn't going to fry anything.


So it's based on Qualcomm Quick Charge? QC is a competing, slightly older, slightly simpler standard to USB-PD that can do what you described. It's...useful sometimes.


No, it’s not based on anything. A QC charger will output 5v by default and only increase the voltage after a negotiation. This is exactly as described: a USB-A style charger brick only it outputs 12v instead of 5v, no negotiation, nothing preventing you from plugging in a device expecting 5v and getting 12v. The only ‘safety feature’ is that it has ‘12V’ printed on it.

You can find it here: https://www.thermoworks.com/12volt-ac-adapter/


ThermoWorks products are made in China

and marketed to gullible Americans.


There are high-end brands that spec products to a high standard and have them made (to that standard) in China. But I agree, GP is confusing high-price with high-end.

That said, thermometry is pretty easy and well understood and you don't need crazy accuracy for cooking, so 'low-end' is fine really, just don't pay high-price for it.


The year is 2025, not 1985. If you pay them, the Chinese can make you anything you want.

The thing is that there are a lot of dollar stores in the West that want cheap shit for the paupers. And that is where the bad reputation comes from.


I think there's a (wrong) expectation that an American manufacturer would idiot-proof their products and not do anything dumb like double the voltage while keeping the connector the same.

The number of times I've heard people complain about "cursed" M-M 3-prong AC power cables suggests that there is no amount of idiot-proofing that will keep a determined American safe from themselves.


Yes, not sure if you meant that as disagreement, but I completely agree. China is on par with, if not surpassing, the most advanced manufacturing capability anywhere else in many areas. It can just also offer very cheap, poor-tolerance, mass-produced crap.


> ThermoWorks products are made in China

Not sure what you are trying to imply here. Products manufactured in China are of poor quality? iPhones are made in China and it would be a challenge to find any device with higher build quality than that. On the flip side, we all know how terrible the quality of US made cars is.


This happens because these devices had USB micro-B before and the manufacturer just replaced the port without reading the spec.

Even some mainstream products have this issue. I have an automatic door opener from a large company and the battery pack has the same issue. It is shipped with a special cable you have to use as no other USB-C cable works.


There is also another problem. The spec is large and it's not aimed at those who want to implement the simplest possible USB C compliant device.

Based on the table of contents the most promising section is "2.3.4 USB Type-C VBUS Current Detection and Usage" but it doesn't actually talk about anything you actually need. You're supposed to click through to the section "4.6.2.1 USB Type-C Current" where it shows the reference circuit, but it doesn't tell you the values of Rd, which are in section "4.11.1 Termination Parameters".

It's a 300+ page document where you must already know what you're looking for. If you didn't already know that you need two resistors, you wouldn't be able to figure it out with the spec alone.


Sounds like an "annotated spec" or some guides for implementers would be really useful.


When you use a well-documented chip, the datasheet will contain diagrams and they'll have a working demo board which they'll give you the full schematic for. Closer to 3 pages than 300.

Of course, a person can still get it wrong...



This is insanely common.

I have about 6 devices with this problem, and I consider it unforgivable.

Not only did you not include USB-C charging, you went out of your way to trick me and lie and pretend you did. I would have preferred just using micro-USB at that point.

Powkiddy committed fraud and said the RGB30 can charge from USB-C, but they lied: it can only charge from USB-A to USB-C cables. Using it is a massive pain because I have to get adapters I shouldn't need. I'll never buy anything from them ever again.


I feel like the USB committee might be somewhat to blame. When most people think USB-C they're just thinking the cable. Why can't it just do regular slow charging with C to C cable?


It can, it just needs the two resistors, which is the cheapest possible thing the standards committee could have asked manufacturers to do.

USB-C gets complicated at the high end, but for basic functionality I think the standards committee did a very good job at making the cheapest way to do it the correct way, e.g. a USB-C to 3.5mm audio adaptor can be entirely passive, it just needs the right resistor in it.


A lot of phones don't support it though, so it took me three attempts to find a USB-C to 3.5mm adapter that didn't have its own DAC and would work with my phone's FM radio lol


Audio Adapter Accessory Mode was deprecated last year so devices using it will be disappearing.


Do you mind sharing? I was looking for something like this a couple years ago.


>e.g. a USB-C to 3.5mm audio adaptor can be entirely passive, it just needs the right resistor in it.

How does that work? is each USB-C host port, or downstream USB-C hub port required to contain a stereo DAC? Does the standard impose performance requirements like dynamic range, noise, minimum sample rate,...? Does it also mandate the jack can be used for mic / line-in? Does it similarily stipulate inclusion of an ADC in each port?


It doesn't mandate any of that, it's an optional feature.

The data pins are repurposed for analog audio, so it won't work with hubs. You'd of course need a DAC for output and an ADC for mic input, but the point is to replace a headset jack, so you'd have those already.

https://www.usb.org/sites/default/files/USB%20Type-C%20Spec%... (PDF, page 309)


The PCB designer could simply type "Type-C电路图" ("Type-C circuit diagram") into Baidu and follow the instructions in the top result. But they couldn't be bothered.


Maybe, but there's no good excuse for this making it past the prototyping phase. If nobody plugged it into a USB-C power supply and noticed it doesn't work, that's negligent.

By 2019 or so, when USB-C was five years old, somebody on any product design team should have been aware this is a common problem and checked for it when selecting components.


It's not the usb c committee problem, the devices you are buying are out of spec

This is because the cable is 2 sided so it can't assume polarity

So it's a tradeoff for not having to guess how to insert the cable


>This is because the cable is 2 sided so it can't assume polarity

Not really. The USB-C connection pinout is symmetric about a 180 degree rotation, at least as far as power connections go. It's entirely possible (and common, e.g. when using passive converters) to just put power out of it constantly. The main reason for the signaling resistors is to avoid having power presented on the pins when it's not connected, which is more about avoiding corrosion or wear due to small sparks on connection.


And to avoid having two sources (perhaps with slightly different voltages) connected together and leading to hijinks. E.g. a usb A-C cable plugged into a USB-C power supply.


By 2-sided, OP probably means that the problem is that the cable has two USB-C ends, not that the USB-C connector is symmetric.

If you have an A end and a B or C end, you can assume that the device on the A end is supplying power and the device on the B or C end is consuming power without breaking anything. The A end cannot supply power to its device by design, so an A-to-C cable cannot be used to power the A device from the C device, regardless of whether the device on the C end can supply power.

But if you have two C ends, you need some way to establish which device is the supply device and which is the consuming device, because the cable can be used to connect two devices which both can supply power (e.g. a laptop and a phone).
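That negotiation boils down to which termination each end presents on CC. A toy model (purely illustrative; function and labels are made up, and the real spec adds dual-role toggling, Ra for cables, PD, etc.):

```python
# Toy model of how two USB-C ends settle power roles: a port that can
# supply power presents a pull-up (Rp) on CC, a port that wants power
# presents a pull-down (Rd), and each end senses the other's termination.

def resolve_roles(end_a: str, end_b: str) -> str:
    """end_a/end_b: 'Rp' (can supply power) or 'Rd' (wants power)."""
    if end_a == "Rp" and end_b == "Rd":
        return "A supplies B"
    if end_a == "Rd" and end_b == "Rp":
        return "B supplies A"
    # Rp-Rp (two sources) or Rd-Rd (two sinks): neither enables VBUS.
    return "no connection established"

print(resolve_roles("Rp", "Rd"))  # charger to phone
print(resolve_roles("Rd", "Rd"))  # two sinks: nothing happens
```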


> This is because the cable is 2 sided so it can't assume polarity

To clarify (and to tell my own tale on the topic):

The power pins on both sides should be connected in both a plug and a socket. However, when it comes to the USB 2.0 data pins, only the socket end must be double-sided (short A6 to B6 and A7 to B7).

Back when "Type C" was new, I wanted to build a project with it, so I got one of the first socket breakout boards available. I built a mechanical keyboard out of aluminium with a slot milled to fit that breakout board. Only after everything was painted and soldered did I plug it in, and it did not work... It took me a while of troubleshooting before I retried with the cable plugged in the other orientation. The breakout board had connected only A6/A7; B6/B7 were not available.


And if you completely discharge the powkiddy you can't charge it anymore, unless you open it up and physically disconnect the battery, plug the charger in, and then the battery back in.


> I have about 6 devices with this problem, and I consider it unforgivable.

If you still have them, you've forgiven it.

Return them and complain about it, or the manufacturers have no way to tell it bothers you.


The RGB10 Max 3 Pro has the same issue, kinda annoyed with that since my new battery pack is USB-C only..


This was exactly my complaint when the USB-C standards were coming in: having a universal connector means nothing if you need a specific cable and/or power supply to charge a device. You might say it’s not spec compliant, and that’s fine, but it’s still a USB-C port. We’d all be better off if they had just kept it as micro-USB, because at least then I’d _know_ I need a different cable for it.


Here's a simple thing that will fix cheapo electronics with this problem: https://www.tindie.com/products/edison517/usb-chyna/

(it just connects CC1 + CC2 with the appropriate 5.1k resistors)


Would it be possible to build some kind of adapter or C-to-C cable that just contains the missing resistors? (And also probably would have to block any USB PD communication, in case you plug in any device that actually does try to use PD. So the goal would be that the charger always sees a 5V requesting device without PD support while the device always sees a "dumb" 5V charger - regardless of what capabilities the device and charger really have)

It would still suck to have to use a special cable for charging, but at least it's better than not being able to use any modern charger.


Sure, just grab a C-to-A adapter and A-to-C cable. Doesn't block communication though, you could block it by using a 2-wire A-to-C cable.


OK, that's easier than I thought. And I think it should even block the PD communication as the CC line is not passed through.


> Give it a script

Ideally the build tool does that for you, e.g. `./gradlew run -t`.


> things that are a net improvement do not preclude other things that are net improvements.

That's a good framework to think about things. Going all-in on renewables implies keeping fossil fuels around, because storage tech is several breakthroughs behind. Renewable proponents like to point out that every kWh not produced with CO2 emissions is still a win.

Yet deploying renewables means they flood the market with cheap electricity when the weather is good, hurting the profitability, and thus viability, of (i.e. precluding) stable low-carbon sources (in other words I'm butt hurt about nuclear).

> The vast majority of offsetting schemes are little more than accountability laundering and on-paper games, not translating to any concrete offsetting in the real world.

A case I heard is that they count the carbon captured by planting trees, yet ignore it when the carbon is released back to the atmosphere in a wildfire.


A diesel train releases orders of magnitude less CO2 than flights though.


> dataclasses are, um, classes

So is the case when you use `namedtuple`, which creates a new class. This is not an interesting gotcha.

Classes (the Python language construct) are how you implement records (the language-neutral concept) in Python.

It's ironic that the "There Is Only One Way to Do It" language has multiple bad ways to implement records though.
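To the point that both constructs just produce classes, a quick check in CPython (`PointNT`/`PointDC` are made-up example names):

```python
from collections import namedtuple
from dataclasses import dataclass

PointNT = namedtuple("PointNT", ["x", "y"])  # returns a brand-new class

@dataclass
class PointDC:  # also just an ordinary class with generated methods
    x: int
    y: int

print(isinstance(PointNT, type))       # True: namedtuple created a class
print(isinstance(PointDC, type))       # True
print(issubclass(PointNT, tuple))      # True: and it's a tuple subclass,
print(PointNT(1, 2) == (1, 2))         # True: so it equals plain tuples
print(PointDC(1, 2) == PointDC(1, 2))  # True: dataclass generates __eq__
```

That tuple equality is one of the "bad ways" gotchas: a `PointNT(1, 2)` compares equal to any unrelated 2-tuple, which a dataclass avoids.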

