
We first noticed Google login issues with our app; we can't log in with Google anywhere now, and Google Analytics is down as well.


This is actually clever: let the market decide the price and the worth of each book for training. Pricing per model might be tricky; instead, annual licensing for training might be a better pricing structure. Very quickly, all the big publishers and big labs might find quite precisely what the fair price is to pay per book/catalogue.


Yeah, you don't want to price them so high that nobody will pay for them, or maybe even have the dreaded "Contact us for pricing" thing set up.


What’s the mechanism for sunlight/artificial light psoriasis treatment?


Amazing outcome, congratulations! Back in 2015, working at Gusto as the first growth engineer, I remember reading the VWO blog to understand how A/B testing works.


All of our initial growth came from either word of mouth or the content I was churning out on a daily basis.


If you are an e-commerce brand or tech startup: I'm the CEO of Afternoon.co (YC F25), providing the same services as Bench, including year-end tax filing. We're ready to onboard you ASAP; just email me at roman@afternoon.co.


Amazing post; I read through all the blog posts in a single sitting. It would be great to have a final blog post on the assembly process.

On a different note, it's mind-blowing that today one person can do small-scale design and manufacturing of a consumer electronics product. Super inspiring.


The President of Iran is not the head of state; you might be thinking of the Supreme Leader of Iran, Ali Khamenei, who is very much alive.


Not just the complexity but the absurd amount of human effort behind every object around us.

As John Collison tweeted: "As you become an adult, you realize that things around you weren't just always there; people made them happen. But only recently have I started to internalize how much tenacity everything requires. That hotel, that park, that railway. The world is a museum of passion projects."

https://twitter.com/collision/status/1529452415346302976


During the hackathon the team only did a simulated flight, not a real flight, so take the results on effectiveness with a grain of salt. In any environment with significant seasonal changes, localization based on Google Maps will be a lot harder.


Every 5 days, a satellite from the Sentinel mission takes a picture of your location; it's every 8 days for the Landsat mission. That data is publicly available (I encourage everyone to use it for environmental studies; I think any software people who care about the future should use it).

It's obviously not the same resolution as Google Maps, and it needs updating, but it's enough to take into account seasonal changes and even sudden events (floods, war, fire, you name it).
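
If you want to pull this programmatically rather than through a browser, here is a rough sketch using the pystac-client library against the public Earth Search STAC endpoint (the endpoint URL, collection name, bounding box and cloud-cover threshold are all just assumptions for illustration):

    # Rough sketch: list recent, mostly cloud-free Sentinel-2 scenes over an area.
    # Assumes the public Earth Search STAC API and the pystac-client package.
    from pystac_client import Client

    catalog = Client.open("https://earth-search.aws.element84.com/v1")

    bbox = [2.25, 48.80, 2.45, 48.95]  # arbitrary example area (lon/lat), roughly Paris

    search = catalog.search(
        collections=["sentinel-2-l2a"],
        bbox=bbox,
        datetime="2024-06-01/2024-07-01",
        query={"eo:cloud_cover": {"lt": 20}},  # keep mostly clear scenes
        max_items=10,
    )

    for item in search.items():
        print(item.datetime, item.properties.get("eo:cloud_cover"), item.id)

Each returned item links to the underlying imagery assets, so going from "which dates are clear over my area" to actual pixels is only a few more lines.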


Where can you find this data?



There is also https://apps.sentinel-hub.com/eo-browser/, which brings Landsat and Sentinel together under the same interface.



Hmm, I used the links shared below, but the picture of my home is at least 4-6 months out of date. What am I missing?


I don't know where you live, but the default search at https://apps.sentinel-hub.com/eo-browser/ uses Sentinel-2 (true color), 100% maximum cloud coverage (which means any image), and the latest date.

So you should be able to find the tile of your region at a date close to today, definitely not 4-6 months old.


Satellite images can be taken on a dependable schedule, but the weather doesn't always provide a clear view of the ground.


It occurred to me that in a war or over water this wouldn't be useful. But I think it would be a useful technology (which, to be fair, likely already exists), alongside highly accurate dead reckoning systems, as a secondary fallback navigation when GPS is knocked out or unreliable.


> in a war … wouldn’t be useful

Why do you say that? Navigational techniques like this (developed and validated over longer timeframes, of course) are precisely for war, where you want to cause mayhem for enemies who try to prevent you from doing that by jamming GPS.

This is not just an idea; there are already fielded systems.

> over the water this wouldn’t be useful

What is typically done with cruise missiles launched from sea is that a wide sweep of the coast is mapped where the missile is predicted to make landfall. How wide this zone has to be depends on the performance of the inertial guidance and the quality of the fix it starts out with.


Well, landmarks have a tendency to change quickly in a war zone, making whatever map material you have useless, or close to useless.

All the navigational methods predating GPS still work perfectly fine, though.


For the human eye, maybe. For a computer using statistics, less so. Extracting signals from under a mountain of noise is a long-solved problem; all of our modern communication is based on it.
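
As a toy illustration (numbers invented, nothing to do with any actual system): cross-correlating against a known template digs a signal out of noise that completely swamps it sample by sample.

    # Toy matched filter: recover a known template buried in heavy noise via cross-correlation.
    import numpy as np

    rng = np.random.default_rng(0)
    template = np.sin(2 * np.pi * np.linspace(0, 4, 1000))  # known waveform, amplitude 1
    signal = rng.normal(scale=3.0, size=10_000)             # noise ~3x stronger per sample
    true_offset = 4321
    signal[true_offset:true_offset + len(template)] += template

    # Cross-correlate and pick the peak; integrating over the template length
    # recovers the offset even though the waveform is invisible by eye.
    corr = np.correlate(signal, template, mode="valid")
    print("estimated offset:", int(np.argmax(corr)), "true offset:", true_offset)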


You can get new satellite imagery every day… (be it military imagery if you're a major power, or just commercial imagery like your average OSINTer).


Sure. And then you have to upload those new, vetted images to all your drones. Other nav data is much more stable.

Mind you, military hardware is not your smartphone; OTA updates are usually not a thing, for various reasons.

The approach is for sure interesting, though.


That is all really interesting speculation, but I'm not describing a system which could be, but one which is already available and fielded. In cruise missiles it is called DSMAC.

Here are some papers: https://secwww.jhuapl.edu/techdigest/Content/techdigest/pdf/...

https://apps.dtic.mil/sti/tr/pdf/ADA315439.pdf


Basically inertial guidance enhanced by terrain matching. Which is great, but terrain matching as a stand-alone is pretty useless. And it still requires good map data. Fine for a cruise missile launched from a base or ship; it becomes an operational issue for cheap throw-away drones launched from the middle of nowhere.


Yes, that's also how it works right fucking now.


Well, if you combine it with dead reckoning, I guess even a war-torn field could be referenced against a pre-war image?

I mean, a prominent tree along a stone wall might be sufficient to be fairly sure, if you've at least got some idea of the area you're in via dead reckoning.


And dead reckoning is already standard in anything military anyway. Has been for decades.

As an added data source to improve navigation accuracy, the approach sure is interesting (I am no expert in nav systems, just remotely familiar with some of them). Unless the approach is tried in real-world scenarios and developed to proper standards, we won't see it used in a military context, though. Or in civilian aerospace.

Especially since GPS is dirt cheap and works just fine for most drone applications (GPS, Galileo, GLONASS, doesn't matter).


For a loitering drone I imagine dead reckoning would cause significant drift unless corrected by external input. GPS is great when it's available but can be jammed.

I was thinking along the lines of preprocessing satellite images to extract prominent features, then using modern image processing to try to match against the observed features.

A quite underconstrained problem in general, but if you have a decent idea of where you should be due to dead reckoning, then perhaps quite doable?
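
For what it's worth, that matching step is easy to prototype with off-the-shelf OpenCV. A rough sketch (file names are placeholders, and in practice you'd only search the part of the reference tile your dead-reckoning prior allows):

    # Rough sketch: match a downward camera frame against a stored satellite tile
    # with ORB features, then locate the frame centre in tile coordinates.
    import cv2
    import numpy as np

    tile = cv2.imread("satellite_tile.png", cv2.IMREAD_GRAYSCALE)   # placeholder reference
    frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)    # placeholder drone frame

    orb = cv2.ORB_create(nfeatures=2000)
    kp_tile, des_tile = orb.detectAndCompute(tile, None)
    kp_frame, des_frame = orb.detectAndCompute(frame, None)

    # Hamming distance for binary ORB descriptors; keep only the best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_frame, des_tile), key=lambda m: m.distance)[:200]

    src = np.float32([kp_frame[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_tile[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC homography rejects outlier matches (seasonal changes, new buildings, etc.).
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Where does the centre of the camera frame land on the satellite tile?
    h, w = frame.shape
    centre = cv2.perspectiveTransform(np.float32([[[w / 2, h / 2]]]), H)
    print("frame centre in tile pixel coordinates:", centre.ravel())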


You can't use visual key-points to navigate over open water.

You can use other things like visual odometry, but there are better sensors/techniques for that.

What it can do, if you have a big enough database onboard, and descriptors that are trained on the right thing, is give you a location when you hit land.


That's exactly what the comment you replied to was describing.


> You can't use visual key-points to navigate over open water.

No, but you can use the stars. Even during the day.


True, but that requires an accurate clock and specialised hardware. Ideally you need to be above the clouds as well.
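
On the clock point, a small astropy sketch (star, assumed position and time are arbitrary) shows how quickly a star's expected alt/az moves, i.e. how a clock error turns directly into a position error:

    # Sketch: a star's apparent position vs. clock error, hence why celestial fixes need a good clock.
    import astropy.units as u
    from astropy.coordinates import AltAz, EarthLocation, SkyCoord
    from astropy.time import Time

    sirius = SkyCoord(ra=101.287 * u.deg, dec=-16.716 * u.deg)  # Sirius (J2000)
    site = EarthLocation(lat=40.0 * u.deg, lon=-74.0 * u.deg)   # arbitrary assumed position

    t0 = Time("2024-02-01 03:00:00")
    for dt_min in (0, 1, 5):  # simulated clock errors in minutes
        altaz = sirius.transform_to(AltAz(obstime=t0 + dt_min * u.min, location=site))
        print(f"clock offset {dt_min} min: alt {altaz.alt.deg:.3f} deg, az {altaz.az.deg:.3f} deg")

The sky turns about 0.25 degrees per minute of time, which is the classic rule that roughly 4 seconds of clock error costs you about a nautical mile of longitude near the equator.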


For only $300 plus shipping from AliExpress you get a high-accuracy inertial navigation system. It only weighs 10 grams.

The future is scary. It is now straightforward and inexpensive for lots of folks to construct jam-resistant Shahed-style drones. https://www.aliexpress.com/item/1005006499367697.html


And for a little less, you can buy the original, from Analog Devices.[1]

Those things are getting really good. The drift specs keep getting better - a few degrees per hour now. The last time I used that kind of thing it was many degrees per minute.

Linear motion is still a problem because, if all you have is accelerometers, position and velocity error accumulates rapidly. A drone with a downward-looking camera can get a velocity vector from simple optical flow and use that to correct the IMU (rough sketch below). It won't work very well over water, though.

[1] https://www.analog.com/en/products/ADIS16460.html
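
Here's the optical-flow correction in rough form with OpenCV's Farnebäck flow; the altitude, focal length and frame rate are made-up placeholders you'd take from the actual platform:

    # Rough sketch: ground-velocity estimate from dense optical flow on a downward-looking camera.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("downward_camera.mp4")       # placeholder video source
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    altitude_m, focal_px, fps = 120.0, 800.0, 30.0      # assumed platform values

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Median flow in pixels/frame, converted to metres/second via the pinhole model.
        px_per_frame = np.median(flow.reshape(-1, 2), axis=0)
        velocity_mps = px_per_frame * altitude_m / focal_px * fps
        print("ground velocity estimate (x, y) m/s:", velocity_mps)
        prev_gray = gray

In a real system this velocity would feed a filter alongside the IMU rather than being used raw, and over water or featureless terrain the flow estimate degrades exactly as described above.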


> Linear motion is still a problem because, if all you have is accelerometers, position and velocity error accumulates rapidly.

An INS will usually need some kind of sensor fusion to become accurate anyway, like how certain intercontinental ballistic missiles use stars (sometimes only a single one) as a reference. But all these things are based on the assumption of a clear line of sight, and even this Google Maps image-based navigation will fail if the weather is bad.
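
For intuition, here's a toy 1-D version of that fusion: dead reckoning off a noisy accelerometer drifts without bound, and an occasional absolute fix (star sight, terrain match, whatever) pulls it back in. All the numbers are invented; a real INS estimates full 3-D attitude, velocity and sensor biases.

    # Toy 1-D Kalman filter: integrate a noisy accelerometer, correct with sparse position fixes.
    import numpy as np

    rng = np.random.default_rng(1)
    dt, n = 0.1, 600
    true_acc = 0.2 * np.sin(0.05 * np.arange(n))        # made-up true acceleration profile

    # State [position, velocity] with constant-acceleration kinematics.
    F = np.array([[1, dt], [0, 1]])
    B = np.array([0.5 * dt**2, dt])
    Q = 1e-3 * np.eye(2)                                # process noise
    H = np.array([[1.0, 0.0]])                          # fixes measure position only
    R = np.array([[4.0]])                               # fix accuracy ~2 m (1 sigma)

    x, P = np.zeros(2), np.eye(2)
    true_x = np.zeros(2)
    for k in range(n):
        true_x = F @ true_x + B * true_acc[k]
        meas_acc = true_acc[k] + rng.normal(scale=0.05)  # noisy accelerometer

        # Predict: pure dead reckoning (this alone drifts).
        x = F @ x + B * meas_acc
        P = F @ P @ F.T + Q

        # Every 10 s an external position fix corrects the drift.
        if k % 100 == 99:
            z = true_x[0] + rng.normal(scale=2.0)
            y = z - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P

    print("final position error (m):", abs(x[0] - true_x[0]))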


10^-5 degrees/hour drift was achieved in the 1970s, for ICBMs, at very high cost.


The ring laser gyro? It'll be fun when those start showing up on AliExpress.


Sold by "Peace Dove Grocery Store."


Sounds expensive.


“High accuracy”


For oceans, they could use juvenile loggerhead turtles: https://www.reed.edu/biology/courses/BIO342/2011_syllabus/20...


Being able to navigate using only a locally stored map sounds extremely useful in a war.


Don’t we have basically this but it looks at stars?


I guess if it's really a possibility for military use, they won't use Google Maps...


So the article is a fraud.


As with Stable Diffusion, text prompting will be the least controllable way to get useful output from this model. I can easily imagine MIDI being used as an input with a ControlNet to essentially get a neural synthesizer.


Yes. Since working on my AI melodies project (https://www.melodies.ai/) two years ago, I've been saying that producing a high-quality, finalized song from text won't be feasible or even desirable for a while, and it's better to focus on using AI in various aspects of music making that support the artist's process.


Text will be an important input channel for texture, sound type, voice type and so on. You can't just use input audio; that defeats the point of generating something new. You also can't use only MIDI; it still needs to know what sits behind those notes, what performance, what instrument. So we need multiple channels.


Emad hinted here on HN the last time this was discussed that they were experimenting with exactly that. It will come, by them or by someone else quickly.

Text prompting is just a very coarse tool to quickly get some base to stand on; ControlNet is where human creativity enters again.


Yeah, we built ComfyUI, so you can imagine what is coming soon around that.

Need to add more stuff to my Soundcloud https://on.soundcloud.com/XrqNb


For music, perhaps. For sound effects, I think text prompting is a rather good UI.


A ControlNet/img2img-style workflow where you can mimic a sound with your mouth and it then makes it realistic could also be usable.


I think it would be ideal if it could take an audio recording of humming or singing a melody, together with a text prompt, and spit out a track that resembles it.


1. Do your humming and pass it to something like Stable Audio with ControlNet

2. Convert/average the tone for each beat to generate something resembling a music sheet (rough sketch of this step below)

3. Use Vocaloid with LLM-generated lyrics based on your prompt (or just put in your lyrics) and pass in the music file

4. Combine 1-3

Would love to see this
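
Step 2 at least is pretty doable today with off-the-shelf tools. A rough sketch using librosa's pYIN pitch tracker and pretty_midi (the file names and the naive note-segmentation heuristic are placeholders):

    # Rough sketch of step 2: turn a hummed recording into MIDI notes via pYIN pitch tracking.
    import librosa
    import numpy as np
    import pretty_midi

    y, sr = librosa.load("humming.wav")                 # placeholder input file
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"), sr=sr)
    times = librosa.times_like(f0, sr=sr)
    midi_pitch = librosa.hz_to_midi(f0)

    pm = pretty_midi.PrettyMIDI()
    inst = pretty_midi.Instrument(program=0)            # piano, just for previewing

    # Merge consecutive voiced frames with the same rounded pitch into one note.
    note_start, note_pitch = None, None
    for t, v, p in zip(times, voiced, midi_pitch):
        p = int(round(p)) if v and not np.isnan(p) else None
        if p != note_pitch:
            if note_pitch is not None:
                inst.notes.append(pretty_midi.Note(100, note_pitch, note_start, t))
            note_start, note_pitch = t, p
    if note_pitch is not None:
        inst.notes.append(pretty_midi.Note(100, note_pitch, note_start, float(times[-1])))

    pm.instruments.append(inst)
    pm.write("humming.mid")

The resulting MIDI (plus your text prompt for timbre and lyrics) is then something a ControlNet-style conditioning channel could consume, which is basically steps 1 and 3.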


But it works great when you don't need much control. Prompt example: “Free-jazz solo by tenor saxophonist, no time signature.”


What other inputs besides text prompting are there for SD? Are you referring to img2img, ControlNet, etc.?


It's crazy that nobody cares. It seems to me that ML hype trends focus on denying skills and disproving creativity by denoising random noise into outputs that are indistinguishable from human work, and to me this whole chain of negatives doesn't seem to have proven its worth.


LLMs allow people without certain skills to be creative in forms of art that are inaccessible to them.

With DALL-E, I can get an image of something I have in my head without investing in watching hundreds of hours of Bob Ross (which I do anyway).

With audio generators, I can produce the music that is in my head without learning how to play an instrument or paying someone to do it. I have to arrange it correctly, but I can put out a techno track without spending years learning the intricacies.

