
It depends on where you are - many large banks have their derivatives pricing libraries written in C++ or C#.

Yes you need it, and no it’s not trivial. Not all quants need it on a daily basis though.

This is not entirely true - they’ve invested quite a bit in maintaining backwards compatibility, at least on the hardware side, through various emulation or translation layers: first during the PPC/x86 migration, then more recently with the x86-to-ARM shift.

Agreed. The difference is stunning.

Where did you find this? This is just plain wrong.

Airparif is an organization involving regular people, the local cities, major polluters, etc. https://www.airparif.fr/airparif/missions-dairparif (unfortunately in French)

It’s funded 24% by the state, 24% by cities, 27% by large companies (including the polluting ones), and 20% by selling what they produce and their know-how, etc.


"Airparif has raised an undisclosed amount of funding from 1 Grant (prize money) round on Jun 01, 2014 from European Union."

Source: https://tracxn.com/d/companies/airparif/__7FZ-JDeTGVeYNdb_2-...


Who, exactly, should be funding air quality measurements within the European Union?

This is certainly not very common in Paris. Cyclists behave extremely badly, but don’t ride on the sidewalk. E-scooters were another story though, and were indeed banned for this reason.

They don't do it often, but I do see it happen once or twice a day when I'm out. Usually it's only for a short distance though.

Sure there are cars.

In my (very personal, more than 20 years living here) experience, it’s a completely different city, and there definitely are fewer cars than before.

Now if car exhausts are better and both effects compound, I won’t complain!


Mismanagement is (very) well documented, but corruption is a very serious accusation. Do you have examples? (That don’t go back to Jacques Chirac.)

I would suggest looking into the various whimsical subsidies from the city hall, the very expensive procurement of useless products, or the ruinous public housing policy, all of which serve to help the friends of the mayor and the constellation of fake associations that support her. All of this has also been documented, and has been mentioned repeatedly by the press.

Try small businesses, large corps which don’t sell IT products, or even universities / government / etc. Avoid startups / competitive places.

You probably want to address your anxiety as well - where it comes from, how it affects you, whether you can do something about it, and if you can’t, how you can deal with it, etc.

I’m supposed to be a high achiever professionally speaking (by most metrics I’ve done well enough, at least well enough for me), but I still have to keep my natural anxiety in check. Surprisingly enough, I’ve eventually figured out that when I’m stressed out there’s a reason, and that I can actually find it if I search for it. If I don’t look for it, I just end up completely paralyzed. The solution usually boils down to addressing the thing I’m afraid of.

Changing careers is another option, but there’s no reason why your anxiety won’t come back so I’d work on the anxiety first.

I wish you the best !


Right now is probably a difficult time for finding many university and government jobs, due to the political turmoil and disruptions to federal funding. It's likely there are some real job postings among all the layoffs and hiring freezes, but it may not be a low stress thing to pursue...


Indeed - I don’t work in the US so hadn’t considered that.


There’s also ‘don’t do it’ (just turn the whole thing off because it’s actually not needed) and ‘use a better middleman’ (OS, compiler, etc., while not going lower level). In my experience these are very cheap and usually not looked at, and most devs jump at the other 5.

I’ve lost my voice for a few days because I spent a solid eight hours in a row talking developers out of solving already-solved problems.

“No, you don’t need Redis for caching output, we have a CDN already.”

“No, you don’t need to invent authentication tokens yourself, JWT is a downloadable package away.”

“No…”

“No…”


Same. I have a team lead who's trying to externalize a SQL query to new infrastructure. Why can't we do a database call on the current machine? Why does this have to be a task for an external machine to carry out???

It's like the current machine can't wait on the database, we have to wait on an external machine that waits on the database. Why why why? I feel web developers are so used to these patterns that they lose common sense.

I'm sure tons of people will agree with the team lead. I think a lot of people just don't get it.


Oh god, this.

The best part is that by externalising querying, it becomes fiddly to support every possible query shape… so…

… inevitably:

    public object GetSQL(string query) …
SQL injection is the inevitable end result of this design pattern.
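
The boring fix is to keep the SQL server-side and pass only typed parameters across the boundary; a minimal sketch assuming Microsoft.Data.SqlClient (the Orders query and Order type are hypothetical):

    using System.Collections.Generic;
    using Microsoft.Data.SqlClient;

    public record Order(int Id, decimal Total);

    public static class OrderQueries
    {
        // The SQL text stays server-side; only typed parameters cross the boundary.
        public static List<Order> GetOrdersByCustomer(SqlConnection conn, int customerId)
        {
            using var cmd = new SqlCommand(
                "SELECT Id, Total FROM Orders WHERE CustomerId = @customerId", conn);
            cmd.Parameters.AddWithValue("@customerId", customerId); // parameterized: no injection
            using var reader = cmd.ExecuteReader();
            var orders = new List<Order>();
            while (reader.Read())
                orders.Add(new Order(reader.GetInt32(0), reader.GetDecimal(1)));
            return orders;
        }
    }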

Because they get tired of babysitting two sets of PRs and deployment pipelines for every feature and some of the bug fixes. So I’ll just make the service “generic” to stop having to keep changing the service. Brilliant!

Pain is information, but some people take the wrong actions based on that information.


This was in a single repo, but the problem is the same: there is an impedance mismatch between SQL queries and typical RPC APIs, such that it becomes decidedly non-trivial to represent every possible arbitrary query "shape" that might occur in a report.

Picture the kind of queries that might be generated by something like an embedded Crystal Report or a Power BI report web control. You can get all sorts of group-by operators, top-n sorts, sums, min/max/avg, etc...


To be fair, JWT is abysmal in more than one respect, and there's a reasonable chance you'd be better off on your own.

It's like "No, you don't need to write a configuration parser, you can just serialize an object" which is a well travelled road straight to Dante's seventh circle.


> reasonable chance you'd be better off on your own.

Sure, yes, if you know what you're doing and have valid reasons for going off the beaten path.

In this case I was trying to explain to a developer why their token was horrifically insecure despite superficial appearances.

This was an off-the-cuff sort of thing: I was simply scrolling through some code talking about something else and I saw "encrypt" in the middle of it... and sigh...

I talked for well over an hour just listing the security vulnerabilities in just three lines of code.

"Static keys used in encryption can be cracked by obtaining many ciphertext samples. You actually wanted keyed signing, not encryption."

"The scope isn't included and you're sharing the same key everywhere. That means dev/tst/prd tokens are interchangeable."

"Also... let's just keyword search for this 'random' (not) key in the Git repos... and oh... ohh... it's everywhere. Oh god. Unrelated apps too!"

"There's no dates in there. That means tokens last forever."

"The name is separated from the role with an ASCII symbol that's a valid user-name character. So if I call myself Bobby Tables#Admin, then I am. Fun!"

"Tokens last for a long time and can only be encrypted and decrypted with a single key, which means you can't rotate keys without logging out all users all at once."

Etc...
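
What they actually wanted was a handful of lines of keyed signing; a minimal sketch with HMACSHA256 (the field layout here is hypothetical):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class TokenSigner
    {
        // Keyed signing, not encryption: anyone can read the token,
        // but nobody without the key can forge or alter it.
        public static string Sign(byte[] key, string name, string role, DateTimeOffset expires)
        {
            // Length-prefixed fields: a '#' or '|' inside the name can't bleed into the
            // role, and the expiry is inside the signed payload, so tokens actually expire.
            string payload = $"{name.Length}:{name}|{role.Length}:{role}|{expires.ToUnixTimeSeconds()}";
            using var hmac = new HMACSHA256(key);
            byte[] sig = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
            return payload + "|" + Convert.ToHexString(sig);
        }
    }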

Hence losing my voice for a bit.

Also, a little bit of my sanity.


> Static keys used in encryption can be cracked by obtaining many ciphertext samples.

They can? My understanding was that with a well-designed symmetric algorithm and a key with enough entropy this shouldn't be possible.


Many (most?) symmetric algorithms simply generate a stream of pseudo-random bits, which is then XORed with the plaintext. This stream is deterministic, which means that the same key "in" will produce an identical bit-stream "out".

This means that if the same key is reused, and the plaintext is predictable, then you'll see patterns in the output. Those patterns can be used to figure out the key. Heck, you don't even need the key! Just register a bunch of distinct user names (or whatever you can control), and observe the change in the encrypted token. Eventually you can collect enough data to generate arbitrary tokens, or at least tokens with some useful values under your control. You can also exploit different error response codes, which is a common design fault because "decryption failure", "parser failure", and "access denied" tend to go through different code-paths and throw different exception types.
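
Concretely, here's the cancellation as a minimal sketch (the wrapper class is just to make it compile):

    using System;

    static class KeystreamReuse
    {
        // With a reused keystream k: c1 = p1 ^ k and c2 = p2 ^ k, so
        // c1 ^ c2 == p1 ^ p2 -- the keystream cancels out entirely, and
        // known plaintext in one message exposes the other.
        public static byte[] Xor(byte[] a, byte[] b)
        {
            var result = new byte[Math.Min(a.Length, b.Length)];
            for (int i = 0; i < result.Length; i++)
                result[i] = (byte)(a[i] ^ b[i]);
            return result; // Xor(c1, c2) equals Xor(p1, p2): no key required
        }
    }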

The solution is to mix in a block of random bits to break this determinism. This is the initialization vector (IV). The developers -- of course -- failed to do this and used a constant.

Even that is insufficient, because most encryption algorithms provide only confidentiality. They don't provide authenticity ("signing"), which is more important for tokens.

(An aside: authenticated encryption algorithms are starting to get popular, and these provide both at once efficiently.)

Essentially, it doesn't matter how "secure" an algorithm is, it won't achieve security if it is misused and applied for the wrong purpose.
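
To make "the right purpose" concrete, a minimal sketch with .NET's AesGcm (assuming .NET 8; key storage and the decrypt side elided):

    using System.Security.Cryptography;
    using System.Text;

    byte[] key = RandomNumberGenerator.GetBytes(32);   // in real life, loaded from a secret store
    byte[] nonce = RandomNumberGenerator.GetBytes(12); // fresh random nonce per message, never a constant
    byte[] plaintext = Encoding.UTF8.GetBytes("name=bob;role=user");
    byte[] ciphertext = new byte[plaintext.Length];
    byte[] tag = new byte[16];                         // authenticates the ciphertext

    using var gcm = new AesGcm(key, tagSizeInBytes: 16);
    gcm.Encrypt(nonce, plaintext, ciphertext, tag);
    // Ship nonce + ciphertext + tag together; flipping any byte makes Decrypt throw.

The fresh nonce restores the randomness a constant IV throws away, and the tag provides the authenticity that confidentiality-only modes never did.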


> This is the initialization vector (IV). The developers -- of course -- failed to do this and used a constant

Oh, at that point you're not even using the algorithm anymore. Why is this even possible in the library? I would've assumed that any sensible implementation would handle the initialization vector for you, and manually setting it up would require very verbose explicit configuration.


Also: buy a bigger box, or better IO, or both. A lot of issues are solved by scaling something up. It's not more optimal, but it does solve the performance problem and is often cheaper than months of performance tracing and development work.

Changing the data structure can in some cases also be a “don’t do it”.

I advised another engineer to write the code to generate error pages for the sites we host the same way I pre-generated stylesheets. He did not, and ended up making almost 50k service requests across four services to my 4500 across two. And then it broke after he quit because it tripped circuit breakers by making FIFTY THOUSAND requests. The people paying attention to capacity planning didn’t size for anomalous traffic.

He was trying to find a specific piece of customer data and didn’t notice it was in the main service and only needed to be parsed. So I didn’t just rewrite it to work like mine, I extracted code from mine to call from both and that was that. Went from an estimated 90 minutes per deployment (I got exasperated trying to time a full run that never completed, so instead I logged progress reports to figure out how many customers per minute it was doing) down to less than five minutes. And by not thrashing services that were being called by user-facing apps.

If that data hadn’t been in the original service, I would have petitioned to have it added. You shouldn’t have to chain three or four queries to find a single column of data.


Also “do it in advance (and reuse)”

People act like I have two heads when I start drilling down on the expected read-versus-write rate of the application. Because when there are orders of magnitude of difference between the two, it shifts when you should make the expensive calculations: either on the infrequent write, or lazily on the infrequent read (a sketch follows below).

And if I’m honest I kind of wish I had two heads in this situation so I could bite them all. Fucking amateur hour.
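
The write-side version, as a minimal sketch (all names hypothetical):

    // When writes are rare and reads are hot, pay the expensive step once, at write time.
    record ReportData(string[] Rows);

    class ReportCache
    {
        private string _rendered = "";

        public void Update(ReportData data) =>
            _rendered = ExpensiveRender(data);   // infrequent write: do the work here

        public string Read() => _rendered;       // hot path: return the precomputed value

        private static string ExpensiveRender(ReportData data) =>
            string.Join("\n", data.Rows);        // stand-in for the real calculation
    }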


And batching

Batching and throttling. If you’re doing work with a loose deadline, don’t let it outcompete work with a tight deadline. That’s a self-DoS. Cap the batch at no more than 1/10th to 1/8th of the overall traffic and things go more smoothly.
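
A minimal sketch of that cap with a semaphore (the 4-of-40 split is a hypothetical instance of the 1/10th rule):

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    // Cap background batch work at a fraction of capacity so it can't
    // outcompete the latency-sensitive traffic.
    var gate = new SemaphoreSlim(4); // ~1/10th of a hypothetical 40-worker pool

    async Task RunBatchItemAsync(Func<Task> work)
    {
        await gate.WaitAsync();      // parks batch work once it hits its share
        try { await work(); }
        finally { gate.Release(); }
    }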


