Why does that make your life barely enjoyable? Curious to know what kind of lifestyle or living situation you have that the weather has such a big impact on you.
I'm relatively new to programming and had a question about TypeScript's functionality. Is there any specific reason why TypeScript doesn't allow for the creation of custom and intricate data types? For example, I'm unable to define a number type within a specific range, or a string that adheres to a certain pattern (like a postal code).
I'm imagining a language where I could define a custom data type with a regular function. For instance, I could have a method that the compiler would use to verify the validity of what I input, as shown below:
function PercentType(value: number) {
  if (value > 100 || value < 0) throw new Error();
  return true;
}
Is the lack of such a feature in TypeScript (or any language) a deliberate design decision to avoid unnecessary complexity, or due to technical constraints such as performance considerations?
You could trivially define a `parsePostalCode` function that accepts a string and yields a PostalCode (or throws an error if it's the wrong format).
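For instance, a minimal sketch of that approach using a "branded" type (the brand trick, the name parsePostalCode, and the five-digit format are illustrative assumptions, not anything built into TypeScript):

// A string that can only be produced by the parser, so the type
// itself records that validation has already happened.
type PostalCode = string & { readonly __brand: 'PostalCode' };

function parsePostalCode(input: string): PostalCode {
  // Illustrative constraint: US-style five-digit codes.
  if (!/^\d{5}$/.test(input)) throw new Error(`Invalid postal code: ${input}`);
  return input as PostalCode;
}

const ok = parsePostalCode('90210');  // typed as PostalCode
// const bad: PostalCode = '9021';    // compile error: a plain string lacks the brand

Any function that takes a PostalCode instead of a string is then guaranteed by the compiler to only ever see values that went through the parser.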
Ranges like percent are much trickier—TypeScript would need to compute the return type of `Percent + Percent` (0 <= T <= 200), `Percent / Percent` (indeterminate because of division by zero or near-zero values), and so on for all the main operators. In the best case scenario this computation is very expensive and complicates the compiler, but in the worst case there's no clear correct answer for all use cases (should we just return `number` for percent division or should we return `[0, Infinity]`?).
In most mainstream programming languages the solution to this problem is to define a class that enforces the invariants that you care about—Percent would be a class that defines only the operators you need (do you really need to divide Percent by Percent?) and that throws an exception if it's constructed with an invalid value.
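A minimal sketch of that pattern (the class and its method set are illustrative):

// Every Percent that exists has passed the constructor's check,
// so the 0-100 invariant holds wherever the type appears.
class Percent {
  constructor(readonly value: number) {
    if (value < 0 || value > 100) {
      throw new RangeError(`Percent out of range: ${value}`);
    }
  }

  // Expose only the operations you actually need; results are
  // re-validated by the constructor.
  plus(other: Percent): Percent {
    return new Percent(this.value + other.value);
  }
}

Note that plus can still throw at runtime (60% + 60% is out of range), which is exactly the question a type system would otherwise have to answer statically.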
This is a feature some (experimental) programming languages have - look into dependent types. The long and short of it is that it adds a lot of power but comes at an ergonomic cost - the more your types say about your code, the more the type checker needs to be able to understand and reason about your code, and you start to run up against some fundamental limits of computation unless you make trade-offs: giving up Turing-completeness, writing proofs for the type checker, stuff like that.
Another interesting point of reference is "refinement types", which allow you to specify things like ranges to "refine" a type; the various constraints are then run through a kind of automated reasoning system called an SMT solver to ensure they're all compatible with each other.
> Is the lack of such a feature in TypeScript (or any language) a deliberate design decision to avoid unnecessary complexity, or due to technical constraints such as performance considerations?
It makes a lot of things impossible. For example, if you defined two different types of ranges, OneToFifty and OneToHundred similarly to your PercentType above, the following code would be problematic:
let x: OneToFifty = <...>;
let y: OneToHundred = <...>;
y = x;
Any human programmer would say the third line makes sense, because every OneToFifty number is also a OneToHundred number. But for a compiler that's impossible to determine in general: the validation functions are arbitrary JavaScript, and JavaScript is Turing-complete, so the compiler can't generally prove that one type is a subset of the other.
In other words, any two custom-defined types like that would be unassignable from and to each other, making the language much less usable. Now add generics, co-/contravariance, type deduction, etc., and suddenly it becomes clear how much work adding a new type to the type system is; much more than just a boolean function.
That said, TypeScript has a lot of primitives, for example, template string types for five-digit zip codes:
type Digit = '0' | '1' | '2' | '3' | <...> | '9';
type FiveDigitZipCode = `${Digit}${Digit}${Digit}${Digit}${Digit}`;
(Actually, some of these are Turing-complete too, which means type-checking will sometimes fail, but those cases are rare enough that the TS team deemed the tradeoff worth it.)
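To illustrate (the same idea with the digit union written out; a sketch, not production code):

type Digit = '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9';
type FiveDigitZipCode = `${Digit}${Digit}${Digit}${Digit}${Digit}`;

const ok: FiveDigitZipCode = '90210';       // compiles
// const short: FiveDigitZipCode = '9021';  // error: only four digits
// const alpha: FiveDigitZipCode = '9021x'; // error: 'x' is not a Digit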
It's the fundamental programming language design conundrum: every programming language feature looks easy in isolation, but once you start composing it with everything else, it gets hard. And hardly anything composes as complexly as a programming language.
There's sort of a meme where you should never ask why someone doesn't "just" do something, and of all the people you shouldn't ask that of, programming language designers are way, way up there. Every feature interacts not just with itself, not just with every other feature in the language, but also in every other possible combination of those features at arbitrary levels of complexity, and you can be assured that someone, somewhere out there is using that exact combination, either deliberately for some purpose, or without even realizing it.
TypeScript's type system can actually express a bounded numeric range as a union, using recursive tuple types:

type Enumerate<N extends number, Acc extends number[] = []> = Acc['length'] extends N
  ? Acc[number]
  : Enumerate<N, [...Acc, Acc['length']]>

type NumberRange<F extends number, T extends number> = Exclude<Enumerate<T>, Enumerate<F>>

type ZeroToOneHundred = NumberRange<0, 100>
One limitation is that the range has to be bounded on both ends, so constructing a type for something like GreaterThanZero is not possible.
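Note also that, as written, the upper bound is exclusive: NumberRange<0, 100> is the union 0 | 1 | ... | 99. For example:

const a: ZeroToOneHundred = 42;     // compiles
// const b: ZeroToOneHundred = 100; // error: 100 is excluded by the exclusive upper bound
// const c: ZeroToOneHundred = -1;  // error: below the range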
Similarly for zip codes you could create a union of all possible zip codes like this:
type USZipCodes = '90210' | ...
Often, with the idea you have in mind, the solution is to implement a class whose constructor does a runtime check of the requirements: if the checks pass, it instantiates the instance; otherwise it throws a runtime error.
In functional programming this is often handled with an Option, which can be thought of as an array that always has exactly 0 or 1 elements: 0 elements when a constraint is not met, and 1 element when all constraints are met.
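A minimal sketch of the idea in TypeScript (a hypothetical Option, not the API of the library linked below):

// An Option<T> holds either exactly one value (Some) or none (None).
type Option<T> = { kind: 'some'; value: T } | { kind: 'none' };

const some = <T>(value: T): Option<T> => ({ kind: 'some', value });
const none: Option<never> = { kind: 'none' };

// The constraint check returns none instead of throwing.
function parsePercent(value: number): Option<number> {
  return value >= 0 && value <= 100 ? some(value) : none;
}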
This [0] is a library I wrote for JS/TS that provides an implementation of Options. Many others exist, and other languages like Rust and Scala support the Option data structure natively.
I switched from QWERTY to Colemak about 5-10 years ago for a solid year or so.
My WPM decreased by around 25%, and I actually found Colemak to be rather uncomfortable; with QWERTY (and DVORAK) you tend to alternate strokes between hands. Even if there is more finger travel, it just feels right to me.
Also, having a different layout than the peers around you is an absolute pain.
My conclusion is that having an alternative layout is not worth the marginal improvements, if any, it may offer. If I were forced to try another layout, though, I would try DVORAK.
I’m not worried. There’s much more to the job than pedantic code reorganizing. As a matter of fact, it seems to be good at what I’d like not to do as a frontend dev.
As a "one of these day devs" I only care about performance when it starts becoming a problem and I see nothing wrong with the way I'm going about this.
> I see nothing wrong with the way I'm going about this.
The wrong part is that you don't measure performance, which was OP's point. Just measuring performance is a very hard, labor-intensive, resource-intensive task. "One of these days" devs mostly don't even know how to approach it, but even if they did, the mountain of infrastructure they sit on, which is in many cases completely opaque to them, would make it impossible for them to be productive (or do anything at all) when it comes to estimating the performance of their programs.
Add to this the fact that most things affecting performance are basically out of your control. If the problem is in the framework -- maaaaybe you can replace or patch the framework. If the problem is in the browser -- with a 0.1% probability you might convince users to use another browser. And if it's down to the OS the browser is running on -- well, you yourself probably won't install a different OS just to make your own program happier...
But the complaint isn't really about the "one of these days" devs; it's about the infrastructure they live in, which has made it basically impossible to care about performance.
The problem is that performance and reliability are issues that creep up, and by the time they're a problem they are much more expensive to fix than if they had been first-class priorities from the outset. Everyone who didn't care before finds new reasons to put it off, because now it's so expensive to address.
Performance issues are also often trivial to head off from the beginning but require a rewrite to remediate later.
To everyone who argues that premature optimisation is bad: that's like saying we should go to the Moon by building a bus and then fixing any performance issues that prevent orbital insertion once it's successfully moving down the highway.
They could, but it would be to the detriment of their core business: engagement. Facebook doesn't want to fragment into a million different servers the way that (for instance) Discord does, even though that may be what a lot of their users would prefer.
Twitter has it even worse - their whole schtick is a huge chattering town hall. Advertisers and brands want this; users increasingly don't.
I tried to express this in a blog post[0], but the gist is that in their quest for engagement the big social media networks have evolved down a path that is less and less attractive to users. The environment (the internet-using population) is changing, but the big social networks cannot change to meet new needs without ditching the things that make them money.
It feels as though the online community is about to circle back to semi-private, forum-like communities. At least, that's my wishful thinking. I remember being involved in a local bowmaking (archery) forum almost a decade back; the quality of the posts there is something I remember fondly to this day.
(a solopreneur, and someone who works at a four-person startup)