
Depends. For certain fields the pay is great and there’s a dearth of candidates.

For other fields there is also a dearth of candidates but the pay falls short and you’ll be leaving tens of thousands of dollars on the table compared to what you could get with other languages.


I want to love C++.

Over my career I’ve written hundreds of thousands of lines of it.

But keeping up with it is time consuming and more and more I find myself reaching for other languages.


Same. Luckily my team switched to Rust almost 100%, so I don't need to learn the godforsaken coroutine syntax, the pitfalls laid for you when you use char wrong with it, or in which subset of calls std::ranges does something stupid and causes a horrible performance regression.

Bjarne has been criticized for accepting too many (questionable) things into the language even at the dawn of C++, and the committee has kept up that behavior. Moreover, they have a pattern of, given the options, always choosing the easiest-to-misuse and most unsafe implementation of anything that goes into the standard. std::optional is a mess, so is curly-bracket initialization, and auto is like choosing between stepping on Legos or putting your arm into a bag full of spiders.
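
To make the curly-bracket complaint concrete, here is the classic vector footgun (a minimal sketch):

    #include <vector>

    std::vector<int> a(10, 2);  // parentheses: ten elements, each equal to 2
    std::vector<int> b{10, 2};  // braces: two elements, 10 and 2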

The committee is the worst combination of "move fast and break things" and "not on my watch". C++98 was an okay language, C++11 was alright. Anything after C++14 is a minesweeper game with increasing difficulty.


> Bjarne has been criticized for accepting too many (questionable) things

He even writes that way in his own article... The quote from the last section of the introduction was hilarious, and actually made me laugh a little bit for almost those exact reasons.

BS, Comm ACM > "I would have preferred to use the logically minimal vector{m} but the standards committee decided that requiring from_range would be a help to many."


I went from being curious about C++, to hating C++, to wanting to love it, to being fine with it, to using it for work for 5+ years, to abandoning it, and finally to wanting to use it for game development, maybe. It's the circle of life.


The masochist in me keeps coming back to c++. My analogy of it to other languages is that it’s like painting a house with a fine brush versus painting the Mona Lisa with a roller. Right tool for the job I suppose.


It's my job and career (well, C and C++) but I often try to avoid C++. Whenever I use it (usually writing tests) I go through this cycle of re-learning some cool tricks, trying to apply them, realizing they won't do what I want or that the syntax to do it is awkward and more work than the dumb way, and I end up hating C++ and feeling burned yet again.


Yeah, it's a struggle. Keeping to a good subset often works out, though. I recognize the feelings. Best of luck. :)


Same here.

>>contemporary C++30 can express the ideas embodied in such old-style code far simpler

IMO, newer C++ versions are becoming more complex (too many ways to do the same thing), less readable (prefer explicit types over 'auto', unless unavoidable) and harder to analyse for performance and memory implications (it's hard to even track down what is happening under the hood).

I wish the C++ language and standard library would have been left alone, and efforts went into another language, say improving Rust instead.


I have used auto liberally for 8+ years; maybe I'm just accustomed to reading code containing it, but I really can't think of it being a problem. I feel like auto increases readability; the only thing I dislike is that they didn't make it a reference by default.
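
For example, a minimal sketch of the copy-by-default behaviour I mean:

    #include <string>
    #include <vector>

    void visit(const std::vector<std::string>& names) {
        for (auto name : names)  { /* auto deduces std::string: copies every element */ }
        for (auto& name : names) { /* auto& binds by reference: no copies */ }
    }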

Where do you see difficult-to-track-down performance/memory implications? Lambdas come to mind, and maybe coroutines (I've yet to use them, but I'm guessing there may be some memory allocations under the hood). I like that I can breakpoint my C++ code and look at the disassembly if I am concerned that the compiler did something other than expected.


I just wish they hadn't repurposed the old "auto" keyword from C and had used a new keyword like "var" or "let".

   #define var auto
   #define let auto


If we're going that route, how about

   #define var auto
   #define let const auto

?


I was thinking of having one or the other, but let as the const form is appealing. ;-)
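
A quick sketch of how that would read in practice:

    #define var auto
    #define let const auto

    let x = 42;      // expands to: const auto x = 42;
    var y = x + 1;   // expands to: auto y = x + 1; (y stays mutable)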


Given how important backwards compatibility is for C++, it's either take over a basically unused keyword or come up with something so weird that it would never appear in existing code.

Java solved this by making var a reserved type name, not a keyword, but I don't know if that's feasible for C++.


E.g. `std::ranges::for_each`, where the lambda captures a bunch of variables by reference. I would hope the compiler optimizes this to be the same as a regular loop. But can I be certain, when compared to a good old for loop?
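
Something like this pair is what I'd want to compare (a minimal sketch):

    #include <algorithm>
    #include <vector>

    long sum_ranges(const std::vector<int>& v) {
        long total = 0;
        std::ranges::for_each(v, [&total](int x) { total += x; });
        return total;
    }

    long sum_loop(const std::vector<int>& v) {
        long total = 0;
        for (int x : v) total += x;
        return total;
    }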


To be fair std::ranges seems like the biggest mistake the committee allowed into the language recently.

Effectively, other than rewriting older iterator-based algorithms to use the new ranges iterators, I just don't use std::ranges... Likely the compiler cannot optimise it as well (yet), and all the edge cases are not worked out yet. I also find it quite difficult to reason about versus the older iterator-based algorithms.

for_each takes a lambda and calls it for each element in the iterator pair; if the compiler can optimise it, it becomes a loop, and if it can't, it becomes a function call in a loop, which probably isn't much worse... If for some reason the lambda needs to allocate per iteration, it's going to be a performance nightmare.

Would it really be much harder to take that lambda, move it to a templated function that takes an iterator pair, and call it the old-fashioned way?
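
Something like this sketch (apply_each is a made-up name; it's essentially what the pre-ranges std::for_each already does):

    template <typename It, typename F>
    void apply_each(It first, It last, F f) {
        // the callable is a template parameter, so it inlines as well as any lambda
        for (; first != last; ++first)
            f(*first);
    }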


Yeah, the std::ranges implementation is a bit of a mess. The inability to start clean, without regard for backward compatibility, limits what is possible. I think most people can see how you could implement comparable functionality with nicer properties from a clean sheet of paper. It is the curse of being an old language.


There are sane approaches to dealing with this - e.g. epochs.

This wasn’t proven by the time c++11 was ready, but for c++20 and beyond it’s a shame they didn’t go with this.


Did you try the two versions in Godbolt?


Just ban the ranges lib; it's hot garbage anyway. Compilers are able to optimize lambdas fairly well nowadays (when inlined), so I wouldn't be that concerned.


You don't 'have' to keep up with the language, and I don't know that many people try to keep up with every single new feature - but it is worse to be one of those programmers for whom C++ stopped at C++03 and who fight any feature introduced since then (the same people generally have strong opinions about templates too).

There are certainly better tools for many jobs and it is important to have languages to reach for depending on the task at hand. I don't know that anything is better than C++ for performance-sensitive code.


I’ve been using c++ since the late 90’s but am not stuck there.

I was using c++11 when it was still called c++0x (and even before that, when many of the features were being developed in Boost).

I took a break for a few years over c++14, but caught up again for c++17 and parts of c++20...

Which puts me 5-6 years behind the current state of things, and there are even more new features (and complexity) on the horizon.

I’m supportive of efforts to improve and modernize c++, but it feels like change didn’t happen at all for far too long and now change is happening too fast.

The ‘design by committee’ with everyone wanting their pet feature plus the kitchen sink thrown in doesn’t help reduce complexity.

Neither does implementing half-baked features from other ‘currently trendy’ languages.

It’s an enormous amount of complexity - and maybe for most code there’s not that much extra actual complexity involved but it feels overwhelming.


It’s okay to be a few years behind the standard; the compilers tend to be as well.


Yeah, the issue is more that the perceived complexity means I’m less interested in investing the time to catch all the way back up.


If you already used C++20 you aren't meaningfully behind; very little of interest has been introduced since then, and much of it isn't usable yet because of implementation issues.


I’ve touched on some of c++20, but haven’t used it extensively.

Specifically here are areas I haven’t used that appear to have nontrivial amounts of complexity, footguns, syntax and other things to be aware of:

* Ranges

* Modules

* Concepts

* Coroutines

Each of these is a large enough topic that it will take time and effort to reach a level of competence and understanding equivalent to what I have with other areas of c++.

I don’t mind investing time learning new things, but with commentary around the web (and even this thread) calling the implementation and syntax a hot mess, at some point it’s a better investment to put that learning into a language without all the same baggage.

I really wish c++ had gone with breaking change epochs for c++20.


I've been writing C++ since 1996-ish.

Less and less, for sure.

Nothing the past few years.

They killed it.


If you only read HN, you would think C++ died years ago.

As someone who worked in HFT, C++ is very much alive, and new projects continue to be created in it simply because of the sheer number of experts in it. (For better or for worse)


Can also confirm c++ is alive and well at FAANG. Might still be the most popular language for most new projects.


* for some values of FAANG

C++ has been dead and effectively banned at amzn for years. Only very specific (robotics and ML generally) projects get exemptions. Rust is big and only getting bigger


Fair! I would say people would be surprised to learn pretty much every large AI project is mostly c++ because of its interop with python.

Some FAANGs focus on AI more than others.


The fact that we don't have a viable alternative yet doesn't exactly mean that the language is in good shape.


It just means it's in the best shape of any of the languages in its domain.


Can confirm pretty much the entire embedded systems world uses either C or C++.

That's probably most devices in the world.


It used to be C++ would be the last choice for embedded...

Modern C++, with constexpr and friends, and the massive work and cunning they have put into avoiding template bloat...

...C++ is now my first choice for embedded.


I have listened to a few podcasts by HFT people. It sounds like you try to maximize performance and use a lot of C++ skills. Very interesting to listen to, but I wonder: how does anyone pick up those skills?


Took me a moment to realize that "killed it" was being used in the negative sense.


Almost a haiku :)


Since c++14 or c++17 I've felt no need to keep up with it. That's cool if they add a bunch more stuff, but what I'm using works great now. I only feel some "peer pressure" to signal to other people that I know c++20, but as of now, I've put nothing into it. I think it's best to lag behind a few years (for this language, specifically).


The compilers tend to lag a few years behind the language spec too, especially if you have to support platforms where the toolchains lag latest gcc/clang (Apple / Android / game consoles).

Respectfully, you might want to add at least a few C++20 features into your daily usage?

consteval/constinit guarantee what you usually want constexpr to do. I've personally found them great for making lookup tables and reducing the number of constants in code (and c++23 expands what can be done in consteval).
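
For example, a minimal sketch of the lookup-table pattern:

    #include <array>

    // consteval: this function may only run at compile time
    consteval std::array<unsigned, 256> make_squares() {
        std::array<unsigned, 256> t{};
        for (unsigned i = 0; i < 256; ++i)
            t[i] = i * i;
        return t;
    }

    // constinit: guaranteed to be initialized at compile time, before main() runs
    constinit auto squares = make_squares();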

Designated initializers are a game-changer for filling structures. No more accidentally populating the wrong value in a structure initializer, or writing individual assignments for each member you want to initialize.
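
A minimal sketch (Config is a made-up type):

    struct Config {
        int  width  = 640;
        int  height = 480;
        bool vsync  = false;
    };

    // C++20 designated initializers: members are named at the call site,
    // and anything omitted keeps its default
    Config cfg{ .width = 1920, .height = 1080, .vsync = true };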


You don't have to "keep up with it", if by this you mean what I think you mean.

You don't have to use features. Instead, when you have a (language) problem to solve or something you'd like to have, you look into the features of the language.

Knowing they exist beforehand is better but is the hard part, because "deep" C++ is so hermetic that it is difficult to understand a feature when you have no idea which problem it is trying to solve.


Wrong. Most programmers spend tremendous amounts of time reading and maintaining someone else's code. You absolutely have to keep up with it.


Thankfully "most" C++ code was written before C++11 (good luck with programs that fully utilize "modern" C++'s constructs and their semantics, because at this point only compilers can reliably manipulate them).


I think it's good enough for side projects. It's more powerful than C, so I don't need to hand-roll strings and some algorithms, but I tend to keep to a minimal number of features because I'm such an amateur.


I mean, right from Bjarne's mouth:

> I used the from_range argument to tell the compiler and a human reader that a range is used, rather than other possible ways of initializing the vector. I would have preferred to use the logically minimal vector{m} but the standards committee decided that requiring from_range would be a help to many.

Oh so I have to remember from_range and can't do the obvious thing? Great. One more thing to distract me from solving the actual problem I'm working on.
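
For reference, a minimal sketch of the two forms being contrasted, using a map-to-vector conversion like the article's:

    #include <map>
    #include <ranges>
    #include <string>
    #include <vector>

    std::map<std::string, int> m{{"one", 1}, {"two", 2}};

    // C++23: the from_range tag is required
    std::vector<std::pair<std::string, int>> vec(std::from_range, m);
    // the "logically minimal" form Bjarne wanted: vector{m}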

What exactly is wrong with the C++ community that blinds them to this sort of thing? I should be able to write performant, low-level code leveraging batteries-included algorithms effortlessly. This is 2025, people.


On the other hand, the decline of robust, high-quality software started with the introduction of very immature languages, such as the JavaScript and TypeScript ecosystems.

Really, any language other than those two.


Nice work OP.

I’ve done a fair amount of Chinese language segmentation programming - and yeah it’s not easy, especially as you reach for higher levels of accuracy.

You need to put in significant amounts of effort just for accuracy increases of less than a few percentage points.

For my own tools, which focus on speed (and are used for finding frequently used words in large bodies of text), I ended up opting for a first-longest-match algorithm.

It has a relatively high error rate, but it’s acceptable if you’re only looking for the first few hundred frequently used words.

What segmenter are you using, or have you developed your own?


Thanks for the kind words!

I'm using Jieba[0] because it hits a nice balance of fast and accurate. But I'm initializing it with a custom dictionary (~800k entries), and have added several layers of heuristic post-segmentation. For example, Jieba tends to split up chengyu into two words, but I've decided they should be displayed as a single word, since chengyu are typically a single entry in dictionaries.

[0] https://github.com/fxsjy/jieba


Great project! It's fascinating how hard segmentation is and how many approaches there are. I thought I'd mention a trick that can let you segment without a backend. When you double click Chinese text in the browser, it will highlight an entire word. For example, try double clicking on the text here: 一步登天:走一步就到天堂美好境地。 It highlights/segments the first 4 characters as a chengyu, and the others as one or two character words. I haven't been able to discover what method Apple and Microsoft use to segment, but it seems to do a good job. You can even use JavaScript's Range.expand() function to do this programmatically. I once even made a little JS library that can run in the background and segment words on a page.


Last I checked, browsers basically wrap ICU's word-break iterator: https://unicode-org.github.io/icu/userguide/boundaryanalysis...
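
For anyone curious, a minimal sketch of driving that iterator directly, assuming ICU4C's C++ API (segment_words is a made-up name; error handling elided):

    #include <memory>
    #include <string>
    #include <unicode/brkiter.h>
    #include <unicode/unistr.h>

    void segment_words(const std::string& utf8) {
        UErrorCode status = U_ZERO_ERROR;
        std::unique_ptr<icu::BreakIterator> bi(
            icu::BreakIterator::createWordInstance(icu::Locale::getChinese(), status));
        icu::UnicodeString text = icu::UnicodeString::fromUTF8(utf8);
        bi->setText(text);
        // each [start, end) pair delimits one segmented "word"
        for (int32_t start = bi->first(), end = bi->next();
             end != icu::BreakIterator::DONE;
             start = end, end = bi->next()) {
            icu::UnicodeString word = text.tempSubStringBetween(start, end);
            // ... use word ...
        }
    }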


That’s neat!


> How do you people comfortably debug C in Linux ?

I just got comfortable using gdb/lldb from the terminal. Once you get used to it, it's fine (albeit not pretty).


Which visual studio are you using?

It’s been a number of years since I’ve used it, but Visual Studio Pro could do all these things - at least for as long as I was using it (since Visual C++ 5).

VS Code, on the other hand, is nowhere near as featureful or powerful.


I use VS2022 Enterprise

If you know of solutions, I'd be very thankful for any info.

P.S.

Note: though I hit all of these problems, I probably didn't spend enough time finding solutions (maybe I just tried the first links on Google and so on). E.g. I tried `strcmp` for breakpoints, and tried to write a .natstepfilter.

So, if VS really can do all of this, I'm sorry for my haste.


I don't know about C, but for C# you can write a custom expression that gets evaluated as the condition.


And with dmypy (included with mypy) it’s even faster.


I've found dmypy very underbaked. It's very easy to get it to regularly crash or pin a CPU indefinitely in my codebase.


Yeah it’s far from perfect, but speed is usually not its biggest fault.

I’ll still be switching to the Astral offering as soon as it’s production ready.


> I usually name my variable ‘a’.

Me too!


My .exrc file is about 10 lines long and has no plugins.

Combined with ctags and a terminal it’s all I need for the languages I’m familiar with (c, c++, rust, python and several others)


I always wonder: how do you debug with such a setup?


Gdb, lldb or pdb in the terminal.

It took a bit of getting used to at first, but then it just becomes normal.


> kills performance

And battery.

I gave up on Alacritty because it was always using the dedicated graphics card of my MacBook, and there was no reasonable way to make it use the integrated graphics card because that was “low performance”.


- Ghostty does vsync by default and supports variable refresh rates (DisplayLink). If you're on battery and macOS wants to slow Ghostty down, it can and we respect it.

- Ghostty picks your integrated GPU over dedicated/external

- Ghostty sets non-focused rendering threads as QoS background to go onto E-cores

- Ghostty slows down rendering significantly if the window is obscured completely (not visible)

No idea if Alacritty does this, I'm not commenting about that. They might! I'm just talking from the Ghostty side.


That's a great approach.

Not sure on the current state of Alacritty, but a few years back the suggested solution for users interested in battery performance was to switch to a different terminal emulator: https://github.com/alacritty/alacritty/issues/3473#issuecomm...


I, a person who doesn't care about battery performance one iota (because my computer has no battery), love this answer and approach. Not all software is for everyone, and authors drawing a line in the sand like that works out better for everyone in the long term, instead of software that kind of works OK for everything.


In some cases yes. In this case, in my opinion it can be strictly wrong.

The GPU requirements of a terminal are _minuscule_ even under heavy load. We're not building AAA games here, we're building a thing that draws a text grid. There is no integrated GPU on the planet that wouldn't be able to keep a terminal going at an associated monitor's refresh rate.

From a technical standpoint, there is zero downside whatsoever to always using the integrated GPU (the stance Ghostty takes) and plenty of upside.


Because _my_ computer has no battery. There is a plethora of computers out there with batteries that can run Linux, Windows, and macOS. These computers can, on paper, run Alacritty.

The cherry on top is me being a former user of a 2010 MBP that would crash when using the discrete GPU (it was _the_ reason Apple went with AMD later on). And some apps insisted on using it, even when I disabled it.

I like Rust applications, but I don't like this response. The dev sounds worn out, whereas the dev of Ghostty seems to be a pleasure to deal with.


More than happy for software authors to draw a line in the sand - I’ve done that myself too.

I just find myself on the other side of the line for Alacritty.


This is the Alacritty answer to a lot of queries. I took the advice, eventually.


Yep. That comment was when I stopped using Alacritty.


> - Ghostty slows down rendering significantly if the window is obscured completely (not visible)

About this. For whatever reason, I often end up with foreground windows (e.g. Chrome) covering the background window entirely, except for a sliver a few pixels wide.

Would Ghostty handle this case? I don't believe there's any point in full-speed rendering if less than a single line of text is shown, but the window isn't technically obscured completely.


We rely on the OS to tell us when we're obscured, and macOS will only tell us if the window is fully obscured (1 pixel showing is not obscured).


> - Ghostty sets non-focused rendering threads as QoS background to go onto E-cores

Assuming you're referring to Apple Silicon chips, how does Ghostty explicitly pin a thread to an E-core? IIRC there isn't an explicit way to do it, but I may be misremembering.


The QoS influences which cores threads are placed on: https://developer.apple.com/documentation/apple-silicon/tuni....
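
For reference, the underlying C-level call a thread can use to opt in (a minimal sketch of Apple's API, not Ghostty's actual code):

    #include <pthread/qos.h>

    void demote_current_thread() {
        // background QoS: the scheduler will prefer E-cores for this thread
        pthread_set_qos_class_self_np(QOS_CLASS_BACKGROUND, 0);
    }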


You can tell something to run on E-cores, just not on P-cores.


Can I configure it to run on the dedicated GPU? I'm on a desktop; power consumption is not an issue.


I struggle to understand why any of the above approaches would cause any impact worth maintaining a configuration flag over.


Same. ctags all the way!


I'm curious: what language are you working in where ctags are useful?

I spent years fighting with those to try to get a satisfying setup that worked, but found ctags to be high maintenance (always breaking in some way, not context-aware in untyped languages, the index getting obsolete quickly and taking forever to update...), and I never looked back after trying coc.nvim.


I've used it extensively with c++, rust and python, and to a lesser extent with a handful of other languages

The lack of context awareness is a problem. For Python I had to add `--python-kinds=-i` to `~/.ctags` so that it ignores imports when generating the tags.

I've also bound `<leader>t` to `:tn<cr>` and `<leader>r` to `:tp<cr>` so I can easily go back and forwards between tags if there is more than one match.

