
You're just reducing the blast radius by using podman; you'll likely need secrets for your app to work, and those will be exposed regardless of the podman approach.


Most people don’t have NPM keys in their application containers.


If you're developing in a container, you'd have to be doing it without, say, mounting your home directory into it.

The reality here is that this is the sort of attack SELinux should be good at stopping. It isn't, because no one uses SELinux, the most commonly used policies don't confine the user profile in a useful way, and a whole bunch of tools love ambient credentials in environment variables.
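
To make the "ambient credentials" point concrete: any code you run, including a package's postinstall script, can read whatever tokens are exported in your environment. A minimal sketch (the name patterns are just common conventions):

  import os

  # Anything running as your user sees every secret exported into the environment.
  patterns = ("TOKEN", "KEY", "SECRET", "PASSWORD")
  for name, value in os.environ.items():
      if any(p in name.upper() for p in patterns):
          # A real attacker would POST these somewhere instead of printing.
          print(f"{name} = {value[:4]}...")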


Just checked my single-node Kafka setup, which currently handles 695.27k events/sec (daily average) into Elasticsearch without breaking a sweat. Kafka has been the only stable thing in this whole setup.

zeek -> kafka -> logstash -> elastic
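
If anyone wants to see roughly what the kafka -> elastic leg looks like, here's a minimal Python sketch (my actual setup does this through Logstash; the topic, index, and hosts below are made up):

  import json
  from kafka import KafkaConsumer          # kafka-python
  from elasticsearch import Elasticsearch

  consumer = KafkaConsumer(
      "zeek-logs",                         # hypothetical topic name
      bootstrap_servers="localhost:9092",
      value_deserializer=lambda v: json.loads(v.decode("utf-8")),
  )
  es = Elasticsearch("http://localhost:9200")

  # Each Kafka record becomes one Elasticsearch document.
  for msg in consumer:
      es.index(index="zeek", document=msg.value)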


How is node failure handled? Is this using KRaft or ZK?


Out of curiosity, what does your service do that it handles almost 700K events/sec?


Someone wrote a PHP7 script a while back to generate some of our daily reports, and nobody wants to touch it. Docker happily runs the PHP7 code in the container and generates the reports on any system. It's portable, and it doesn't require upkeep.


Someone in leadership is also thinking about how they can lower head count by removing the agent master.


20 years ago, our school's class enrollment website allowed just that: by changing account IDs in the URL, we could bypass priority enrollment. I had fun adding my friends and myself to classes we wanted.


I took a slightly different approach and simply wrote a script that checked availability every minute and sent me a text message alert when a seat opened up.

(Upperclassmen often switched their schedules around after the priority enrollment deadline ended)

Not as bulletproof as your approach!
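
It was something like this, reconstructed from memory (the URL and the email-to-SMS gateway address are placeholders):

  import time
  import smtplib
  import requests
  from email.message import EmailMessage

  URL = "https://example.edu/enroll/status?class=12345"  # placeholder

  def notify(text):
      # Carrier email-to-SMS gateway; assumes a local mail relay is running.
      msg = EmailMessage()
      msg["From"] = "me@example.com"
      msg["To"] = "5551234567@txt.example.net"
      msg.set_content(text)
      with smtplib.SMTP("localhost") as s:
          s.send_message(msg)

  while True:
      if "OPEN" in requests.get(URL).text:
          notify("A seat just opened up!")
          break
      time.sleep(60)  # check once a minute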


Incredible. My university's class registration system had unsanitized input in the class search field, so if you knew SQL you could find out exactly how full a class was and dump the whole table of classes without waiting for your registration window to open.

And I'm pretty sure you could insert your student ID into the class that way too :)
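
For anyone who hasn't seen the pattern, here's an illustrative sketch in Python/SQLite (table and column names invented, not their actual system):

  import sqlite3

  db = sqlite3.connect(":memory:")
  db.execute("CREATE TABLE classes (id INTEGER, name TEXT, enrolled INTEGER)")

  # Typed into the search box; the trailing -- comments out the rest of the query.
  search = "%' UNION SELECT id, name, enrolled FROM classes --"

  # Vulnerable: the term is pasted into the SQL, so it can rewrite the query.
  rows = db.execute(
      f"SELECT id, name, enrolled FROM classes WHERE name LIKE '%{search}%'"
  ).fetchall()

  # Safe: a parameterized query treats the whole term as data.
  rows = db.execute(
      "SELECT id, name, enrolled FROM classes WHERE name LIKE ?",
      (f"%{search}%",),
  ).fetchall()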


Heck, you could probably just kick out the people you didn't want to take the class with.


That’s useful. But 30 years ago you could iterate Social Security Numbers.


We have not had a plane flown into a building since, so something must be working. Yes, it's also inconvenient.


That's specious reasoning. I have a rock that keeps tigers away. How does it work? It doesn't, it's just a stupid rock. But I don't see any tigers around here, do you?


Yeah, they added flight deck doors that you can't get through and a policy that the doors stay closed throughout the flight. The TSA did nothing to help; the planes themselves are a hard target now.


Or the threat has subsided. Not saying get rid of security, but I think it’s okay to allow water bottles again.


> We have not had a plane flown into a building since, so something must be working

Yeah, 10 minutes after the event, every person on earth knew that the best strategy is to rush the cockpit, not to sit calmly and wait for the hijackers to ask for a ransom, which was the previous norm.

Between that and reinforced cockpit doors, flying planes into buildings is no longer a viable strategy.


Reinforced cockpit doors have backfired in pilot murder-suicides on Germanwings 9525, LAM470, MH370, and CES5735. They're not a silver bullet.


Can someone tell me how this is collected into SQLite?


I wrote a blog post a while back about reading these dumps: https://search.feep.dev/blog/post/2021-09-04-stackexchange

Presumably they have a script that does something similar to that process, and then writes the resulting data into a predefined table structure.


Nice post!

Yep, my process is similar. It goes...

  - decompress (users|posts)  
  - split into batches of 10,000  
  - xsltproc the batch into sql statements  
  - pipe the batches of statements into sqlite in parallel using flocks for coordination
On my M1 Max it takes about 40 minutes for the whole network. Then I compress each database with brotli, which takes about 5 hours.
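
For comparison, here's a rough Python equivalent of the xsltproc step (column set trimmed down; the real dumps carry many more attributes per row):

  import sqlite3
  import xml.etree.ElementTree as ET

  # Each dump file is one big XML document full of <row .../> elements.
  db = sqlite3.connect("users.db")
  db.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")

  batch = []
  for _, elem in ET.iterparse("Users.xml"):
      if elem.tag == "row":
          batch.append((elem.get("Id"), elem.get("DisplayName")))
          elem.clear()  # keep memory flat on multi-GB files
          if len(batch) >= 10_000:
              db.executemany("INSERT INTO users VALUES (?, ?)", batch)
              batch.clear()

  db.executemany("INSERT INTO users VALUES (?, ?)", batch)
  db.commit()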


Use Google Takeout.


I'd guess that the emphasis in OP's reply was on 'actually work'.


In case it's useful, here's the solution that worked for me the last time I tried it:

1. Takeout to Google Drive :grimace:

2. rclone from Google Drive

But I only do this around once a year, and the last few times there have been new roadblocks.


I made the same bet with Cloudways, which is now owned by DigitalOcean. They filled a gap for me, and I was OK with it if they decided to close shop. I'm glad it didn't go that direction: they're now part of a bigger company that was itself once a small company and is now publicly traded. You make your bets...


Haha, that was questionable for me as well. It's OK for Grammarly to read your stuff, but crash metadata is a no-no.


Welcome to security in 2023 :)

