Hacker News | AdamMeghji's comments

https://sampler.meiji.industries/

I built a TUI sampler which cherry-picks my favourite features from modern & vintage hardware samplers, DAWs, plugins, outboard FX gear, and DJ equipment.

If you know what an AKAI MPC Live, MPC 3000, SP404, SP1200, BOSS RC-202, Alesis 3630, Serato Sample, S950 filters, and stem separation do, then you'll love seeing these "greatest hits" up in a terminal interface.

Last year while on vacation in Costa Rica, I started scratching my own itch for locating and organizing samples, which quickly evolved into adding more and more features while keeping it tactile and immediate. It was too fun to stop so I kept going. After a few days I was happily making beats in it, and since then it's only gotten better.

It's live and totally free to use, and works on macOS & Linux (Windows soon). I'm about to launch v1.0 now, just working with folks in the community to round out the Factory Kits a little more for users new to beatmaking.

Turns out, making beats with no mouse and a terminal interface strikes the perfect balance of hardware feel and software power, and I'm loving the result. Been sharing it with folks in my beatmaking sphere and have plans to continue expanding its reach through more collaborations, contests, and in-person events.

Hope it brings you as much joy as it brings me :)


I love working in the terminal and this looks great, but can I ask why you require sign-in? To me that's a bit of a downer :(


totally fair point! the medium/long-term roadmap is to layer in a BBS-esque community, so identity is required. more on this later! regardless, might be nice to have an unauthenticated / "Skip sign in" flow for the folks who just want to use the tool.


Was this previously open source? There's a broken link to a repo at the bottom of the marketing page that results in a 404.


it was not, so that's a mistake that's been fixed, thanks!


Adam, this is amazing!

10/10

It does what it says it does, zero friction.

Beautiful UI. I make some music from time to time, and this is something I will definitely use.


This looks really cool. Bookmarked. I hope I can make some time to play with it. Nice work.


TUI triggered me, so I had to take a look... It appears the GitHub link on your landing page is 404?


Looks very promising! Unfortunately the Docs and GitHub repo don't work.


apologies, those broken links are misleading (docs coming soon, but not open source atm). Removed.


Is this based on Bubbletea TUI framework? The waveforms are pretty cool!


it's using ratatui and built in Rust :)

Great intro video!


This is sick, great job!


why are the docs and github gone??


Very cool!


here goes my Monday


Congrats on the launch! I've spent some time recently with great success speeding up CI for my teams via alternate actions runners, and the increase in efficiency that comes with dramatic reductions in build times is worth it. When the cost is the same (or less), it's an absolute no-brainer.

How do you differentiate from BuildJet, which takes a similar approach?


We've had a few customers migrate over from BuildJet because WarpBuild is in active development. For instance, we are adding support for macOS runners in Jan.

Our mission is broader than just fast runners: it's about a better CI developer experience. This includes surfacing recommendations to optimize build times, insights into the critical paths of workflows, and more.

We're also investing in tooling to overcome issues that currently exist, such as an action to ssh into running workflows for easy debugging.
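
For the curious, adopting an alternate runner is typically a one-line change to `runs-on` in the workflow file. A sketch (the runner label below is illustrative, not a confirmed WarpBuild label):

```yaml
# .github/workflows/ci.yml: pointing a job at an alternate runner provider.
name: CI
on: [push]
jobs:
  test:
    # was: runs-on: ubuntu-latest
    runs-on: warp-ubuntu-latest-x64-4x   # hypothetical provider label
    steps:
      - uses: actions/checkout@v4
      - run: make test
```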


Awesome. Do docker image layers persist across build runs? Github, BuildJet, etc. use ephemeral runners, so subsequent runs have to re-pull everything from scratch, which is where most of my actions' time is spent now. If you're able to persist these across runs, that'd be a reason to switch alone.


We have this with https://depot.dev out of the box. You connect to a native BuildKit and run your Docker image build on native Intel and Arm CPUs with fast persistent SSD cache orchestrated across builds. It’s immediately there on the next build without having to save/load it over the network.
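
For a rough sense of the mechanics: BuildKit can export its layer cache to a registry so the next (ephemeral) runner can reuse it. A generic sketch of that pattern, not depot.dev's implementation; image refs are placeholders:

```yaml
# Workflow step: registry-backed BuildKit layer cache across ephemeral runners.
- uses: docker/build-push-action@v5
  with:
    push: true
    tags: registry.example.com/app:latest
    cache-from: type=registry,ref=registry.example.com/app:buildcache
    cache-to: type=registry,ref=registry.example.com/app:buildcache,mode=max
```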


Not yet, but coming soon (~2 weeks)


This will, by itself, immediately sell me. I’ve spent countless hours and lots of deep deep reading trying to get satisfactory results on GitHub Actions, with no success. From what I’ve seen, plenty of other people are in the same boat.


I'll keep hn posted!


I migrated from BuildJet this week because BuildJet’s caching is broken. Installing cached pnpm dependencies takes about 12s on GitHub and WarpBuild runners. It takes 2m on BuildJet, which is about half the runtime, effectively negating the cost savings of BuildJet over GitHub.

I reported this issue to BuildJet over a week ago and haven’t received any response.


Exactly my experience as well: https://x.com/crohr/status/1732442731715113374

In tests with my GitHub Action [1], which spawns ephemeral runners for any workflow, I found BuildJet's bandwidth 10 to 20 times slower than machines on AWS.

[1]: https://github.com/runs-on/action


I’m currently evaluating Buildjet. I’m curious about this caching issue. Were you using actions/cache or buildjet/cache?

https://buildjet.com/for-github-actions/docs/guides/migratin...


We used BuildJet cache for months. It’s possible it was always broken and I only noticed a few days ago. I tried both, and neither actually cached data. I even tried forking and upgrading the BuildJet variant, to no avail.

I spent a solid couple hours trying to fix this before moving to WarpBuild.


You can continue to use actions/cache if using WarpBuild :)
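
e.g. the stock cache step should drop right in unchanged; only the runner label differs. The path/key below are just illustrative:

```yaml
# Unchanged actions/cache step; works the same on alternate runners.
- uses: actions/cache@v3
  with:
    path: ~/.pnpm-store
    key: pnpm-${{ runner.os }}-${{ hashFiles('**/pnpm-lock.yaml') }}
    restore-keys: |
      pnpm-${{ runner.os }}-
```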


Thanks for your trust! I'm here to ensure you have a good experience with WarpBuild and for feedback/requests.


Electric Unicycle. For short trips and errands, it's a super fast and effective way to zip around my area. For more recreational trips, off-roading on local trails and ravines is super fun. It feels like snowboarding around the city, and has a learning curve of a few days (like skiing). Initially, I borrowed an old one from a friend and decided to stick with it by getting my own (a fairly good one). Next up was the various safety equipment. You could spend anywhere from $1500-4000 on the unit + safety gear depending on how fancy you want to go. Even a minimal base model that's a few years old would be totally fine.


Yeah, I got a powerful escooter and have really enjoyed it. It's fun zipping through town, and it makes it much faster to get to some places since I can go through parks/campuses no problem.

A couple safety tips though, for anyone considering an escooter versus ebike. On an escooter, the wheels have a very small diameter, which means that they are much less stable when you go over bumps. If you're going 15 MPH it's not a huge deal, but if you're going much faster than that things can get dicey.

On an escooter, the thrust that pulls you is coming from your hands (not seat/feet), and that's where the throttle is. As a result, if you hit a bump or divot, your hand will often end up jerking on the accelerator by accident. This can cause problems. As I have ridden more, I've learned to try to only have my hand on the throttle when I'm on obviously clear pavement, and disengage when going over any sort of bump or pavement transition. It makes things much safer.

If I had to do it again, I might get an ebike. They're much more expensive, and I'd feel like it's lazy to choose that instead of my regular bike to go most places. But it might be safer than an escooter (though the latter is more fun IMO!).


A big motivator for going the EUC route was a) the wheel is a huge 18" tire, 3" wide, so small potholes and roots/branches are no problem, thus safer, and b) an EUC with a suspension means I'm less likely to be bounced off, even less so if you add some "power pads" that support the upper foot area. Check it out!


OneWheel feels more like snowboarding/surfing, EUC feels perhaps closer to skiing?


Peggy | Principal Mobile Developer | REMOTE | Full Time | https://peggy.com

Peggy is the social marketplace that allows you to discover, buy, and sell art. We partner with art galleries and artists to onboard their artworks, register a secure digital fingerprint for each artwork, and connect them to our vibrant collector community. Our art fingerprinting tech also enables collector-to-collector resale, creating liquidity that was previously accessible only to billionaires selling Blue Chip art via auction houses. Best of all, we do right by our emerging, contemporary, and living artists by paying 5% artist royalties on each resale transaction.

We're hiring a Principal Mobile Developer for our iOS and Android (flutter/dart) app. You will be partnering with our design, backend, and technical leadership teams as we drive towards our exciting alpha release. You possess expert-level mobile experience in any of: Flutter/Swift/Kotlin/Java/React Native. If you're a seasoned mobile developer without Flutter experience yet, no worries, we've found that experienced polyglots pick it up very quickly.

Peggy is a remote-first company. We are building the company this way so we can curate a team with the best talent from anywhere in the world, and to create flexibility for team members. Team members currently/recently working from: Toronto, Vancouver, CDMX, Brazil, Argentina, London, India, Sweden, Lisbon, Madrid, NYC.

https://peggy.com/careers


Peggy | Senior Back End Developer - Ruby/Rails | Full-Time | REMOTE: US, Canada, Global*

Peggy partners with galleries and artists to make living with and investing in art accessible to all. Finally you can buy incredible art at a price you can afford, and can later sell to fellow collectors if your tastes change or you seek to rebalance your art investments. Core to our values, Peggy does good by the artists themselves, who can finally earn artist royalties for all future secondary sales as well - something they miss out on today. This is all made possible using sophisticated and proprietary AI and ML which keeps buyers and sellers protected without the need for the antiquated and opaque Auction House model.

As a key member of the back end team, you will be responsible for providing an elegant and well-documented GraphQL API as an entry-point into Peggy’s content, data, and business logic, which is crafted in Ruby, Rails and Postgres. You will be supported by a DevOps infrastructure, which leverages AWS, Terraform, CI/CD, CloudWatch, Auth0, Sentry, and other developer-friendly tools and systems.

Peggy is a remote-first company. We are building the company this way so we can curate a team with the best talent from anywhere in the world, and to create flexibility for team members. Team members (incl. ex-Shopify, Uber, Ada) currently/recently working from: Toronto, Vancouver, CDMX, Brazil, Argentina, London, India, Sweden, Lisbon, Madrid, NYC.

More @ https://peggy.com/careers?ashby_jid=bcf449d8-38d0-49d7-aca6-...

*: Global timezone preference within [ GMT-8..GMT+2 ]


Is this possible on macOS at all? I have an RX100 V and an Elgato Cam Link HD, but would love to use that capture card with another cam, and use the RX100 over USB simultaneously.


It works on the Mac:

  brew install gphoto2
  brew install ffmpeg --with-ffplay
  gphoto2 --abilities
  # Abilities for camera             : Sony Alpha-A6300 (Control)
  # ...
  gphoto2 --stdout --capture-movie | ffmpeg -i - -vcodec rawvideo -pix_fmt yuv420p -threads 0  -f matroska - | ffplay -
I'm piping it to ffplay, so this will at least let you test your camera, or you could also use it in OBS as a window source. Also, make sure your camera's USB mode is not set to "Mass Storage" but to "Remote Camera Control".


Thanks for the tip, really appreciate the actual commands. I'm wondering if anyone else is running into this:

    $ brew install ffmpeg --with-ffplay
    Usage: brew install [options] formula

    # Install flags here, nothing about --with.
    Error: invalid option: --with-ffplay


You're right, they removed that option: https://formulae.brew.sh/formula/ffmpeg. I guess ffplay is built in by default now.


can you try piping that to gstreamer?


Sure. Not sure what gstreamer plugin/sink would create a loopback device, but this plays as well:

   gphoto2 --stdout --capture-movie | ffmpeg -i - -vcodec rawvideo -pix_fmt yuv420p -threads 0  -f matroska - | gst-launch-1.0 fdsrc fd=0 ! decodebin !  videoconvert ! videoscale ! autovideosink


per this - https://apple.stackexchange.com/a/356362

it should be,

  osxvideosink

I wonder if this works,

  gphoto2 --stdout --capture-movie | ffmpeg -i - -vcodec rawvideo -pix_fmt yuv420p -threads 0  -f matroska - | gst-launch-1.0 fdsrc fd=0 ! decodebin !  videoconvert ! videoscale ! osxvideosink


Looks like both gphoto2 and ffmpeg are available on Homebrew, worth giving it a shot for sure. FWIW, I did end up building my own ffmpeg because the Debian default didn't have NVIDIA support. Want me to give it a try?


I'd love it if you gave it a try! I've already spent enough hours re-compiling ffmpeg to get nvenc support :)


Spoke too soon: v4l2loopback-utils is the missing piece on the Mac. Found this with a basic search[0], if anyone enterprising wants to take a crack at it:

https://apple.stackexchange.com/questions/353168/how-can-i-c...


Yes please! Would love to use my X-T30 as a webcam on my mac.


Cascable Pro Webcam claims to support the RX100 V:

https://cascable.se/pro-webcam/

Compatibility table:

https://cascable.se/help/compatibility/


Here's a simple gui app[0] that creates a syphon stream, and you can use that from an app called camera twist or possibly from OBS as well.

[0] https://github.com/v002/v002-Camera-Live


The combination of this HTTP Load Balancing announcement, and May 23rd's announcement of CoreOS support are super exciting. I've been working on setting up a similar docker-based multi-AZ autoscaling setup on EC2, but getting it up and running has required integrating a lot of separately moving parts. If Google can simplify this process via a few simple gcutil commands and a cloud-config YAML file, it would be a hugely compelling offering.


Install the tools you need via the Dockerfile (iostat, top, atop, etc.), and an sshd. Then, instead of running the single web process, run supervisord via CMD, which will subsequently launch both your web app process, AND sshd. From there, EXPOSE 80 22, and you can SSH into the container to run any perf analysis tools as usual.

  EXPOSE 80 22
  CMD ["/usr/bin/supervisord"]
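
A minimal sketch of that layout (the base image, package names, and app command are illustrative, not a definitive implementation):

```dockerfile
# Sketch: run both the web app and sshd in one container via supervisord.
FROM debian:stable-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        supervisor openssh-server sysstat procps atop \
    && mkdir -p /run/sshd
COPY supervisord.conf /etc/supervisor/conf.d/app.conf
EXPOSE 80 22
CMD ["/usr/bin/supervisord", "-n"]
```

with a supervisord.conf along these lines:

```ini
[program:web]
command=/usr/local/bin/myapp --port 80   ; illustrative app command

[program:sshd]
command=/usr/sbin/sshd -D
```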


We're using pacer-neo4j (https://github.com/pangloss/pacer-neo4j) at Uniiverse with great success embedding neo4j within a jruby 1.7 webapp. I'd definitely suggest checking it out as well!


whoa... how have i not seen this? this is cool! glad to see there's more neo4j ruby love out there!

check out: https://github.com/karabijavad/recommendations_example

for an example of using cadet with sinatra


I also used this brief window of opportunity as a fun experiment in learning Ansible. As a result, I've published a repo which automates setting up LTC mining on g2.2xlarge instances.

http://github.com/adammeghji/ansible-ltc-mining-on-ec2

If anybody's curious to spin up a GPU instance and dabble with this, this greatly facilitates downloading, compiling, and installing the CUDA drivers, and setting up the LTC miners. Amazon has a $100 credit available too, if the current spot instance prices are prohibitive.
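
Roughly, such a play looks like the following. This is illustrative of the approach only; the task names, package list, and miner repo shown are assumptions, not the repo's exact contents:

```yaml
# Illustrative Ansible play: prepare a GPU instance for mining.
- hosts: gpu_miners
  become: true
  tasks:
    - name: Install build prerequisites
      apt:
        name: [build-essential, git, libcurl4-openssl-dev]
        state: present
        update_cache: true

    - name: Fetch miner source
      git:
        repo: https://github.com/cbuchner1/CudaMiner.git   # era-appropriate CUDA scrypt miner
        dest: /opt/cudaminer

    - name: Build the miner
      command: make
      args:
        chdir: /opt/cudaminer
        creates: /opt/cudaminer/cudaminer
```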


I used your script to help me figure out how to dig for DogeCoin. Ultimately I wound up doing everything on Screen. Did 1, 3, 5, and finally 1 instances, a few hours at a time. I've stopped mining for now, going to wait until I set up my own rig and maybe create a new AltCoin. https://gist.github.com/benatkin/7868889

