Hacker News | quantumduck's comments

Not the future - it already happened, albeit on a smaller scale, with Cruise: https://www.thedrive.com/news/a-swarm-of-self-driving-cruise...

The worst part is they were never really transparent about what the issue was.


I don't know if VBB used this, but most transit agencies publish their data in two standard formats these days: GTFS for static schedules and GTFS Realtime for real-time data. Any application you build around these formats would immediately scale to pretty much every big city.

Google Maps and Apple Maps both provide transit directions using GTFS and GTFS Realtime data (which is partly why Apple Maps was able to add its transit directions feature so easily: Google had spent years dealing with the transit agencies before that, convincing them to publish data in open standard formats).
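For a sense of how simple the static side is: a GTFS feed is just a zip of CSVs with fixed column names, so a few lines of Python get you usable data. A minimal sketch - the stop IDs and names below are made-up illustrations, not real feed contents:

```python
import csv
import io

# Hypothetical two-row excerpt of a GTFS stops.txt file; a real feed is a
# zip archive containing this and other CSVs (routes.txt, stop_times.txt, ...).
stops_txt = """stop_id,stop_name,stop_lat,stop_lon
900100001,S+U Friedrichstr.,52.520335,13.386879
900100003,S+U Alexanderplatz,52.521512,13.411267
"""

# Because every agency uses the same column names, this parser works on
# any agency's GTFS export, not just one.
stops = {row["stop_id"]: row for row in csv.DictReader(io.StringIO(stops_txt))}

print(stops["900100003"]["stop_name"])  # S+U Alexanderplatz
```

That uniformity is the whole point: code written against one feed works against all of them.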


VBB has been publishing GTFS (Static/Schedule) data for almost 10 years now. [1][2]

But there is no (truly open) realtime data (e.g. GTFS Realtime or SIRI) available, because their API [3]

- requires signing a draconian contract (e.g. ridiculous liability clauses, no permission to pass the data on in any form), and
- works on individual vehicles/trips, so you'd have to poll every single one out there to get the equivalent of a GTFS-RT dataset.

There is an unofficial API though [4][5] that is de facto open, and I have built a tool that polls the data and creates a GTFS-RT feed. [6]

[1] https://daten.berlin.de/datensaetze/vbb-fahrplandaten-gtfs [2] https://www.golem.de/news/open-data-verkehrsverbund-berlin-b... [3] https://www.vbb.de/vbb-services/api-open-data/api/ [4] https://github.com/public-transport/hafas-client/tree/5.25.0... [5] https://github.com/public-transport/transport-apis/blob/8e05... [6] https://github.com/derhuerst/berlin-gtfs-rt-server
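To illustrate the polling problem: because the per-vehicle API only answers about one trip at a time, an aggregator has to ask about every trip and merge the answers into a single feed. A purely hypothetical sketch (fake endpoint, made-up fields - not the actual code of the tool linked above):

```python
import time

def fetch_trip_update(trip_id):
    # Hypothetical stand-in for one per-trip HTTP call; values are made up.
    return {"trip_id": trip_id, "delay_seconds": 120}

def build_feed(trip_ids):
    # Merge one poll per trip into a single GTFS-RT-like feed message.
    return {
        "header": {"timestamp": int(time.time())},
        "entity": [fetch_trip_update(t) for t in trip_ids],
    }

feed = build_feed(["trip-1", "trip-2", "trip-3"])
print(len(feed["entity"]))  # 3
```

With thousands of active trips, that inner loop is exactly the polling burden the contract-free GTFS-RT feed spares everyone else from.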


It's also a popular meta-learning algorithm: ALPaCA: https://arxiv.org/abs/1807.08912

Well, no one cares unless one of the parties has enough money and interest to hire a lawyer.


I wonder if that actually is close enough to have a case.


Any dictionary word is untrademarkable. You can use it, but you can’t prevent others from using it as well.


That is not true. Easiest example is Apple.


You can find more companies with “Apple” in their names & logos here: https://www.uspto.gov/trademarks/search

https://www.quora.com/How-does-Apple-Computer-Inc-get-away-w...


Your second link validates my statement. You can absolutely trademark generic terms. There are limitations though.

Try to create a smartphone or computer company called Apple and see what'll happen. You can't trademark the word Apple for your apple farm because other apple farms must be allowed to use that word to describe their product.

That other companies have Apple in their name is not surprising. A trademark is not universal. There are so-called classes, and when you apply for a trademark you have to specify which classes you want covered.

Source: I went through this process and applied and was granted a trademark for my company. Other companies hold a trademark on the same term but in different classes.


Many vocal users, tiny set of their overall users.


Agreed. The anti-Electron crusaders seem to be some of the loudest folks out there, too. 1Pass is great in my opinion; it's a shame it's being so negatively portrayed throughout this thread.


It's not like 1Password is gonna delete your data or block you from accessing it if payment fails and your account is frozen. It just goes into read-only mode.

They have a fairly generous policy in case your account is frozen: https://support.1password.com/frozen-account/


Shazam used to wow me, but then, as others mentioned in the replies, it's essentially matching the signature of the sound to the sounds in its database. If it's one of the songs, it gets matched fairly quickly.

What blew my mind was when Google introduced 'hum and we'll recognize the song for you' in Google Assistant: https://www.google.com/amp/s/blog.google/products/search/hum...

It works so well even with my shitty humming - even my girlfriend can't recognize what the song is but Google can. It doesn't even have the same signature as the original audio file, just similar hums in a noisy environment and it still works. Black magic fuckery.


> it's essentially matching the signature of the sound to the sounds in the database.

You aren't giving it enough credit. The algorithm uses just a few seconds from any part of the song, and has to deal with phone audio quality and often background noise. I mean, you can be in a bar with all that jabber and hold up the phone and it could pick out the song. The app on the phone does the preprocessing to the audio before it is sent to the server that does the matching ... using the comparatively miserable power of a 2001 era cell phone.


Oh that wasn't my intention - Shazam was and is groundbreaking, they did it when no one else could. All I meant was that it seems more "doable and I probably understand how it works" when compared to how Google assistant recognizes songs from my humming.


What is a signature? How is a signature computed from a noisy audio stream, over a mall speaker? How is a signature computed from an arbitrary starting point?


IIRC, it uses a Fast Fourier Transform to find peaks in the audio, then uses the time delays between those peaks to generate a series of "hashes" that are stored in a db. Those hashes can be computed locally on the phone, and then it's a simple db lookup to retrieve potential hits. When Shazam adds a song to the db, they compute the same series of "hashes" across the whole track, so you can identify it from any point in the tune.
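A toy sketch of that idea (window size and the pairing scheme here are simplified illustrations; the real algorithm picks many peaks per frame and pairs them within a target zone):

```python
import numpy as np

def fingerprint(signal, win=256):
    """Toy landmark scheme: FFT each window, keep the loudest frequency
    bin, then hash consecutive peak pairs with their time spacing."""
    peaks = []
    for start in range(0, len(signal) - win + 1, win):
        spectrum = np.abs(np.fft.rfft(signal[start:start + win]))
        peaks.append(int(np.argmax(spectrum[1:]) + 1))  # skip the DC bin
    # Each hash encodes (freq1, freq2, delta-t); delta-t is 1 window here.
    return {hash((f1, f2, 1)) for f1, f2 in zip(peaks, peaks[1:])}

# Two recordings of the "same song" (a chirp), one starting a bit later,
# still share fingerprint hashes - the lookup works from any offset.
t = np.linspace(0, 1, 4096)
song = np.sin(2 * np.pi * (200 + 300 * t) * t)
print(len(fingerprint(song) & fingerprint(song[256:])) > 0)  # True
```

Because the hashes depend only on relative peak positions, it doesn't matter where in the song the recording starts.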


Wow, that's fascinating! I just ended up down the rabbit hole reading Avery Wang's "An Industrial-Strength Audio Search Algorithm" (linked in this thread) - it's such a cool way of "fingerprinting" pieces of music data.


My original comment was from memory of reading a post about how it worked a few years ago. Looking at what you read, I think the gist of what I said is right, though it seems they use a different algorithm than a plain FFT.

Totally agree though. It is something that opened my mind to thinking of a way to solve that problem in a way that actually works. Shazam definitely looked like magic the first time I saw it work.



TL;DR (from skimming through the paper): he figured out that a song's spectrogram looks like a starry sky, so matching a song is like finding a constellation in the sky. How do you do it efficiently? By searching for simple features of your constellation, such as pairs or triples of bright stars - those can be pre-hashed to find matches instantaneously. Once a possible match is found, you compare the rest of the constellation. Nothing breathtaking, in other words. However, among all the people who talked, he was the one who both talked and did, and that's his achievement.
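The "compare the rest of the constellation" step can be sketched in a few lines: matching hashes vote for (song, time-offset) pairs, and a real hit shows up as many votes agreeing on one offset, i.e. the constellation lines up as a rigid shift. The index contents here are made up for illustration:

```python
from collections import Counter

# Hypothetical inverted index: hash value -> list of (song_id, time_in_song).
index = {
    101: [("song_a", 10), ("song_b", 3)],
    202: [("song_a", 11)],
    303: [("song_a", 12), ("song_b", 40)],
}

def match(query_hashes):
    """query_hashes is a list of (time_in_query, hash) pairs.
    Each matching hash votes for (song, db_time - query_time)."""
    votes = Counter()
    for q_time, h in query_hashes:
        for song, s_time in index.get(h, []):
            votes[(song, s_time - q_time)] += 1
    (song, offset), score = votes.most_common(1)[0]
    return song, score

# A query heard starting 10 windows into song_a: all three hashes agree
# on the same offset, while song_b collects only scattered single votes.
print(match([(0, 101), (1, 202), (2, 303)]))  # ('song_a', 3)
```

The offset histogram is what makes it robust: random hash collisions land on random offsets, but the true song piles all its votes in one bin.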


Brilliant stuff is easy to understand, a lot harder to come up with. I could do that! (With a little help from wikipedia, audio processing libraries, the answer sheet, and the knowledge that it's possible in the first place)

To me, this highlights how hashing is the closest thing programmers have to magic.


Create a compound signature. You don't just take one measurement but many measurements and then assess the probabilities. You may have people talking in a mall, but they will be in a narrow frequency band. Similarly you can analyze the repeating elements. Keep iterating and adding stuff until f(signal) performs well
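A hedged sketch of what one such compound measurement could look like (the bands and statistics are arbitrary examples, not a known-good design):

```python
import numpy as np

def compound_signature(signal, rate=8000):
    """Stack several crude measurements into one feature vector; keep
    adding components until f(signal) performs well enough."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / rate)
    # Chatter in a mall sits mostly in the ~300-3400 Hz speech band, so
    # measure energy inside and outside that band separately.
    speech = spectrum[(freqs >= 300) & (freqs <= 3400)].sum()
    non_speech = spectrum[(freqs < 300) | (freqs > 3400)].sum()
    return np.array([non_speech, speech, signal.std()])

t = np.arange(8000) / 8000
feats = compound_signature(np.sin(2 * np.pi * 100 * t))
print(feats[0] > feats[1])  # True: a 100 Hz tone lies outside the speech band
```

Each added component makes the signature harder to confuse, which is the iterating-until-it-works loop described above.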


The closest to the ideal signature?


> What blew my mind was when Google introduced 'hum and we'll recognize the song for you' in Google Assistant

Their announcement actually made me roll my eyes a bit, as SoundHound had that functionality nearly a decade before. I had both SoundHound and Shazam installed on my old phone for these use cases - now Shazam is baked into Siri, so I don’t even have the app itself installed.


How well does Shazam work for you when you hum or sing a song?


I haven’t tried humming with Shazam recently, but I don’t think it worked well back when I did have the actual app. It works very well for music though. I used it around five times, just this Wednesday night at a concert, and it got every track for me.

Soundhound is what had humming “support” explicitly in its product description, and it worked pretty well from what I remember. It’s been long enough though that I may only be remembering the times it worked.


Doesn't work at all


What do you have to ask Siri to get this to work?


If you prefer to access it via your iPhone's control center, you can configure it that way in the control center settings. It is called "Music Recognition" there.


Nice, I will have to check this out. Control Center is definitely a great little overlay, but I haven't reliably figured out how to add things to it. I will investigate further.


For general information have a look here [0]. Also be aware that elements in the control center might even offer additional functionality, e.g. like setting the brightness of the flashlight. In this case instead of just switching the flashlight on by a tap on the button, keep the flashlight button pressed to bring up a slider to set the brightness. Just play around with the other control center elements to find out what is possible.

[0] https://support.apple.com/guide/iphone/iph59095ec58/ios


I usually say "Hey Siri, what song am I listening to?" but it works with a bunch of variations e.g.: "what song is this?"

There's also a bunch of other options to trigger Shazam, main way I use it is from the Control Center: https://support.apple.com/en-us/HT210331


If you have an Apple Watch, you can also set it as a home screen button, which is a lot more discreet in public.


I just say Hey Siri Shazam this.


“Hey Siri, what song is this” works


> Essentially matching the signature of the sound to the sounds in the database.

And Dall-E 2 is just doing fuzzy hashing of images with text keys.

Shazam continues to amaze me because it "just works", and it still feels more magical to me than most of the AI out there, since it directly solves a major problem I didn't even think was solvable: "what is this song!!?"


I enjoy salsa dancing, but I don't know any Spanish, so I use that built-in Google functionality to hum various songs all the time to figure out what they're called.


Dude, spoiler alert. Did you miss the part where OP said they liked not knowing how it works??


xD, spoiling an algorithm


And the downvotes tell me some folks have absolutely no sense of humor


I honestly couldn't tell if you were serious.. the ?? should have given it away but I didn't notice.


There's another where you tap a beat with your space bar, and a website tries to guess the song.


I need to download the Google app (and I presume sign in) to use that feature? Count me out


From a quick glance, Notion has the highest valuation per employee (10 Billion USD for 350 employees).

Some of the companies are really bloated imo. Canva has 3100 employees? Seriously? Yardi lists 8000 employees but no valuation? 8000 seems like a lot, no? That's the same as Stripe, which has a 90 billion USD valuation.
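Back-of-envelope, using the figures quoted above:

```python
# Valuation per employee, from the numbers in this thread (USD).
companies = {
    "Notion": (10e9, 350),
    "Stripe": (90e9, 8000),
}

for name, (valuation, employees) in companies.items():
    print(f"{name}: ${valuation / employees / 1e6:.1f}M per employee")
```

That puts Notion around $28.6M per employee versus Stripe's $11.25M, roughly a 2.5x gap by this (admittedly crude) metric.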


What is the basis of your statement that the factories are never inspected or that the products are never tested?

I interned at a pharmaceutical company's factory in India for a summer; it regularly exported medications to its subsidiaries in the western world, including the United States. The standards are super high - we literally threw away millions of doses due to a small uncertainty in a quality control check.

Besides the local regulations and inspections, a team from the FDA regularly visited the factory to audit it for like 10-15 days. Just for one factory. I distinctly remember this because we had a more Americanized menu for lunch when the team visited LMAO.

So yeah, when you don't know or are not sure, please don't make statements like "factories are never inspected". It makes it seem like you know something, which you clearly don't.


There is clear documentation that the FDA rarely or never inspects overseas factories.

Sometimes once, when they open, and never again. There's just no funding, because that's how our political forces fight: they defund things.

https://www.npr.org/transcripts/723545864


And what makes you think Amazon doesn't do that already? Kiva Systems did what Ocado does now a DECADE ago. Amazon acquired Kiva Systems long ago (in fact, Amazon Robotics was born out of that acquisition), and I'm sure it can do a lot more than Ocado does now.


> And what makes you think Amazon doesn't do that yet?

Because I still see a lot of human beings employed picking things in Amazon warehouses.


I could be wrong, but I don't think Applied Intuition and Polymath have the same product or business plan. Applied Intuition's main product is a high-fidelity simulator; Polymath's product is an actual autonomy stack (hardware and software for real-world robots), with a low-fidelity simulator that gives devs a playground before deploying on a real robot. You can probably train your ML algorithms using synthetic data from Applied Intuition, but Polymath's simulator doesn't serve that purpose - they are using real-world data to develop their autonomy stack.


This is exactly correct, Caladan from Polymath Robotics is just a cheap and easy way to get a bunch of developers to try out our (fairly basic so far) API, and to start thinking about the business logic and applications that could be built on top of this.

Our actual product is autonomy on real vehicles.

We don't plan to ever build any kind of high fidelity sim, just sticking to basic Gazebo or similar.


Spot on. Except we’re just the onboard software (BYO HW)

