If you're interested in the Ruby language too, check out this proof-of-concept gem for an "operator-less" pipe syntax that uses regular blocks/expressions, like every other Ruby DSL.

https://github.com/lendinghome/pipe_operator#-pipe_operator

  "https://api.github.com/repos/ruby/ruby".pipe do
    URI.parse
    Net::HTTP.get
    JSON.parse.fetch("stargazers_count")
    yield_self { |n| "Ruby has #{n} stars" }
    Kernel.puts
  end
  #=> Ruby has 15120 stars

  [9, 64].map(&Math.pipe.sqrt)           #=> [3.0, 8.0]
  [9, 64].map(&Math.pipe.sqrt.to_i.to_s) #=> ["3", "8"]


It's an interesting experiment but standard Ruby is expressive enough.

[9, 64].map { Math.sqrt(_1) } #=> [3.0, 8.0]

For the first example I would just define a method that uses local variables. They're local, so nothing pollutes the surrounding context.
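Concretely, that alternative might look like this (the URI.parse/Net::HTTP.get steps are elided so the example stays self-contained; `body` stands in for the fetched response, and the method name is made up):

```ruby
require "json"

# Plain-method version of the pipeline: each intermediate
# result is just a local variable, visible only inside the method.
def star_message(body)
  stars = JSON.parse(body).fetch("stargazers_count")
  "Ruby has #{stars} stars"
end

star_message('{"stargazers_count": 15120}') #=> "Ruby has 15120 stars"
```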


nmap ; :


I go with nnoremap, but yeah, this is the absolute first edit in my .vimrc


Yes, that's so much easier on the little finger. In fact, my minimal vi config is

  :nnoremap : ;
  :nnoremap ; :
  :nnoremap Y y$
  :vnoremap : ;
  :vnoremap ; :
  :vnoremap Y y$


How can you live without f/t followed by ;?


As a Kafka alternative, has anyone attempted to use PostgreSQL logical replication with table partitioning for async service communication?

Proof of concept (with diagrams in the comments): https://gist.github.com/shuber/8e53d42d0de40e90edaf4fb182b59...

Services would commit messages to their own databases along with the rest of their data (with the same transactional guarantees), and those messages would then be replicated in "realtime" (with all of logical replication's features and guarantees) to the receiving service's database, where workers (e.g. https://github.com/que-rb/que, SKIP LOCKED polling, etc.) are waiting to respond by inserting messages into their own database to be replicated back.

Throw in a trigger to automatically acknowledge/clean up/notify messages and I think we've got something that resembles a queue? Maybe make that same trigger match incoming messages against a "routes" table (based on message type, certain JSON schemas in the payload, etc.) and write matches to the que-rb jobs table instead, for some kind of distributed/replicated work queue hybrid?

I'm looking to poke holes in this concept before sinking any more time into exploring the idea. Any feedback/warnings/concerns would be much appreciated. Thanks for your time!
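A rough sketch of what the receiving side could look like under the assumptions above (table and column names here are illustrative, not from the gist):

```sql
-- Hypothetical inbound messages table, populated via logical
-- replication from the sending service's database.
CREATE TABLE inbound_messages (
  id           bigserial PRIMARY KEY,
  message_type text        NOT NULL,
  payload      jsonb       NOT NULL,
  processed_at timestamptz
);

-- A worker polls for unprocessed messages; SKIP LOCKED lets several
-- workers poll concurrently without blocking on each other's rows.
SELECT id, message_type, payload
FROM inbound_messages
WHERE processed_at IS NULL
ORDER BY id
LIMIT 10
FOR UPDATE SKIP LOCKED;
```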

Other discussions:

* https://old.reddit.com/r/PostgreSQL/comments/gkdp6p/logical_...

* https://dba.stackexchange.com/questions/267266/postgresql-lo...

* https://www.postgresql.org/message-id/CAM8f5Mi1Ftj%2B48PZxN1...


Sounds a bit like change data capture (CDC), e.g. via Debezium [1] which is based on logical decoding for Postgres, sending change events to consumers via Kafka? In particular when using explicit events for downstream consumers using the outbox pattern [2]. Disclaimer: I work on Debezium.

[1] https://debezium.io/

[2] https://debezium.io/documentation/reference/configuration/ou...


I think you'll find Debezium (http://debezium.io) interesting, especially its embedded mode, where it's not coupled to Kafka.

Also, do look at the outbox pattern. It's basically what you are describing.


Just as Kafka isn't a database, PostgreSQL isn't a queue/broker. You can use it that way, but you'll spend a lot of time tweaking it, and I suspect you'll find it's too slow for non-trivial workloads.


Skype at its earlier peak used Postgres as a queue at huge scale with PgQ (part of SkyTools). It had a few tweaks and, sure, it's an anti-pattern, but it can work. It's certainly handy if you're already using Postgres and want to maintain a small stack.


I believe you will enjoy this:

https://medium.com/revolut/recording-more-events-but-where-w...

Not exactly the same architecture you are proposing, and quite complex tbh, but it is working for them.


Interesting idea. I think one issue is that the write throughput of a single master instance is very limited.

But if the working set fits in memory on the instances processing the writes, you could use PostgreSQL logical replication to easily update materialized views on other servers.

But then it starts looking like Amazon Aurora or the Datomic database.


Definitely agree regarding the recent strange semantics like the pipeline[0] and `.:` method reference[1] operators. I do think those kinds of functional features would be handy if they behaved correctly and had a more Ruby-like syntax instead of introducing foreign-looking operators.

Luckily this new pattern matching feature appears to behave as expected and feels pretty natural: just a new `in` keyword and some variable-binding syntax to familiarize ourselves with (which we already sort of use with statements like `rescue`).

A while back we experimented with an alternative Ruby pipe operator proof of concept[2] that is "operator-less" and looks just like regular old Ruby blocks with method calls inside. Maybe there's still a chance for something like this now that those other implementations have been reverted!

  # regular inverted method calls
  JSON.parse(Net::HTTP.get(URI.parse(url)))

  # could be written left to right
  url.pipe { URI.parse; Net::HTTP.get; JSON.parse }

  # or top to bottom for even more clarity
  url.pipe do
    URI.parse
    Net::HTTP.get
    JSON.parse
  end

[0] Revert pipeline operator: http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/...

[1] Revert method reference operator: https://bugs.ruby-lang.org/issues/16275

[2] Experimental "operator-less" Ruby pipe operator proof of concept: https://github.com/lendinghome/pipe_operator


LendingHome | Offices in San Francisco and Pittsburgh | REMOTE friendly

Tech: AWS, Docker, GraphQL, JavaScript/TypeScript/Node.js, Lambda, OCR (tesseract), PostgreSQL, Python, React, Redis, Ruby on Rails

tldr: We're automating the loan origination/underwriting/servicing/investing/etc process

LendingHome is reimagining the mortgage process from the ground up by combining innovative technology with an experienced team. Our goal is to create a seamless, transparent process that transforms and automates the mortgage process from end to end. We've raised $167MM in venture capital with a team of over 300 people and have been featured on the Forbes Fintech 50 list for two years running! LendingHome is uniquely positioned to become the next great financial services brand powered by the most advanced mortgage platform in the world.

Open positions:

  * Engineering Manager
  * Senior/Staff/Principal Data Scientist
  * Senior/Staff/Principal Software Engineer
  * Design/Finance/HR/Marketing/Operations/Product/Sales/etc

Please check out our openings for more details! https://grnh.se/18ad65801


Sometimes it's nice to step away for some perspective and start again fresh somewhere new!

People do actually read these comments so please give it a shot (even with a throwaway) just to see what's out there at least:

Ask HN: Who wants to be hired? October 2019 https://news.ycombinator.com/item?id=21126012

What do you have to lose? You want to be fired anyway!


LendingHome | Offices in San Francisco and Pittsburgh | Remote friendly

Tech: AWS, CloudFormation, Docker, GraphQL, JavaScript/TypeScript, OCR (tesseract), PostgreSQL, Python, React, Redis, Ruby on Rails

LendingHome is reimagining the mortgage process from the ground up by combining innovative technology with an experienced team. Our goal is to create a seamless, transparent process that transforms and automates the mortgage process from end to end. We've raised $167MM in venture capital with a team of over 300 people and have been featured on the Forbes Fintech 50 list for two years running! LendingHome is uniquely positioned to become the next great financial services brand powered by the most advanced mortgage platform in the world.

Open positions:

  * Engineering Manager
  * Senior Data Scientist
  * Software Engineer (Frontend or Backend)
  * Design/Finance/HR/Marketing/Operations/Product/Sales/etc

Check out our job openings and apply at: https://grnh.se/18ad65801


Elixir/Unix style pipe operations in Ruby! https://github.com/LendingHome/pipe_operator

