Hacker News | soultrees's comments

Unfortunately the term UBI still carries negative connotations for a large percentage of the population, so it's going to be a huge fight to ever get UBI implemented as it stands.

But what I'm guessing will happen in the long run is that Employment Insurance will relax its rules enough that almost everyone qualifies, effectively delivering a guaranteed income through the EI program. From a political point of view, it will be easier to refactor the rules of the EI program than to create a whole new program.


You're honestly doing yourself a disservice by using terminology like that. At the time it was a way to generalize a hysterical population in a hysterical time. But now that science has had time to do its thing, multiple studies have come out showing there were additional risks with the vaccine itself and that it wasn't everything it was hyped up to be. Given that it was basically forced upon the population as a doomsday cure, people were right to question what was happening.

Now that the dust has settled, people are more willing to admit they didn't take the vaccine, or are more honest in their opinions about what happened then. You would be surprised at the number of well-educated, well-respected individuals who decided to wait, or who avoided the vaccine while following all the other protocols, but were afraid to speak up because of the absolute outrage coming from people who were generalizing everyone as crazy anti-vaxxers.

A functioning democracy needs people who question the official line, even just to stir the conversation. It needs the freedom for people to speak up against, or at least question, that line, and during COVID we didn't have that.

Just remember, this us-vs-them mentality you have in your mind is all fabricated, and depending on which "channel" you were tuned in to, all the news pointed towards hating the other side more. We could all use a reminder to appreciate our many commonalities with our fellow humans rather than our few differences.


> A functioning democracy needs people who question the official line

But that's not what's going on. What's going on is people questioning logic itself. At the time, we just had no better options, given the knowledge we had. That being uncomfortable is a far cry from questioning the 'official line'; it's just choosing to respond to feeling rather than logic, which is problematic.

Nobody felt good about Covid or any of its solutions. They were merely the least bad. Antivaxxers would do well to understand how logical reasoning works, and to stop attributing choice, emotion, officialdom, and whatnot where there is none. It's a classic case of projection: just because they think that way doesn't mean everyone does.

Not doing the logical thing is always inferior thinking and should be recognised as such.


For me, the logic went as follows:

The elderly, the sickly, and the obese were the most at risk.

Being young and healthy, the risk to me was negligible.

The covid vaccines did not prevent transmission, so getting jabbed would not help eradicate the virus.

Because of the limited benefits the vaccine would provide to me and its non-sterilizing nature, I decided not to get vaccinated against covid.

In the end I got covid, was sick as a dog for a few days, then I was fine. I'm happy with my decision. But I would never suggest to anyone who got the covid vaccine that they made the wrong decision; it's an individual choice to make.


You are fortunate you did not acquire any of the long covid symptoms.

And yes, it turns out that some people have some innate immunity that we were unable to identify early in the pandemic. But nobody knew if they had that or not.

You rolled the dice and came up lucky.


I'm not sure what you mean when you say the vaccines did not prevent transmission. Getting a shot makes it less likely for the virus to reproduce in your body, so any aerosols you emit will carry less or no virus, which is precisely how it helps reduce transmission, does it not?


He means 100% prevention.


But what intervention promises that?


I know, I agree. Maybe the media and scientists set expectations too high? Vaccine == Immune? Not sure.


I'm surprised you two are going on like this as if the purpose of vaccination hasn't been to provide immunization.


It's the first time I've met someone who seems to think immunity is binary, rather than what it AFAIK is: a fraction, usually quite near 100% but never quite there.


I dunno, hanging on every word from the group of people who've been handling AIDS in Africa with little oversight for the past few decades seems pretty illogical. I can now see why their methods may cause a lot of distrust.


You can’t sit around arguing about emergency procedures in the middle of an emergency. That’s not democracy, that’s stalling for disaster.

Sort these things out ahead of time. If you have a problem, that’s when to complain. But people couldn’t be arsed, to the point that a certain future felon cancelled the program meant to deal with things like this.


If by people handling AIDS in Africa you mean the various church groups preaching abstinence (by numbers, the largest group active on the subject), then yes, they indeed require immensely more oversight.

Please stop picking out the factoids from their contexts; you seem to be missing large parts of the picture.


There is a simple and scientific question: does this "leaked" DNA in COVID vaccines have adverse effects? Enough to revise vaccination policies, vaccine purification, or the vaccines themselves?

No people and no methods are relevant, the only enemy to be concerned with is a virus.


> multiple studies have come out that have shown there were additional risks with the vaccine itself

Citation?


You're honestly doing yourself a disservice expecting citations for such obvious helpful information gosh


If you don't want to take the vaccine for the betterment of all then fine, lots of other things you can do to help. Stay home. Mask up. Do more to help stop the spread.

COVID isn't over. And it's because people wouldn't do what was needed to make spaces safe again from such a contagious thing, one that is still spreading, disabling, and killing.


I think we're a little past the point where taking the vaccine is for the betterment of all. Almost everyone I know has had COVID, vaccinated and unvaccinated alike. The vaccine may help you get through it more easily, but it doesn't do much to stop the spread. I've known two separate families, all vaccinated, where one person brought it home and the rest all got it.

People need to continue their lives. You can’t honestly expect people to “stay home” for 3 years. They need to work, and live. It’s not going away, and nearly everyone has had it by now.


> Almost everyone I know has had COVID

The data certainly leans more towards your experience, but I find that shocking from my own experience. Of the people for whom I know whether or not they have had it (and obviously knowing positive is easier than negative), only about 50% have had it (and so far I am in the 50% that has not yet tested positive).

> the vaccine doesn’t do much to stop the spread

In fact, studies have shown that vaccines (as well as prior infection) do reduce, but not eliminate, the likelihood of infecting others.

One example: https://www.ucsf.edu/news/2022/12/424546/covid-19-vaccines-p...


> Vaccinated residents with breakthrough infections were significantly less likely to transmit them: 28% versus 36% for those who were unvaccinated. But the likelihood of transmission grew by 6% for every five weeks that passed since someone’s last vaccine shot.

Excuse me if I'm not wowed. As annoying as anti-vaxxers as a whole are, the other side, where the vaccines were viewed as an absolute panacea, might have actually made us even worse off. I feel like there wasn't much of a political push to get better vaccines made, because that would mean admitting what we have isn't that great.


I have an 11 Pro as my main workhorse. It's multiple years old, I constantly run the battery down to 2% then charge to 100%, and it still lasts all day for me. I feel like the concern I see going on in this thread might be a bit misguided.


If we are sharing anecdotes: my wife and I got iPhone XRs the same Christmas. She regularly fully discharged hers, whereas I charged mine throughout the day; my phone very rarely goes below 40 or even 50 percent. After a couple of years, her battery life was awful and mine was fine.

Basically, I find individual stories vary, but statistics may show overall trends.


My 11 Pro was the same way. Then I got a 14 Pro, and it has noticeably worse battery life, I suspect in part due to 5G, which I have since disabled. I tend to keep it on the charging stand on my desk for video calls now that it has MagSafe (or whatever it's called). It's constantly charging to 100%, and I'd love a way to stop it from charging without having to remove it from the stand like a cave person.


Something to also ask yourself: what would success look like? After 5 years of no revenue, you're probably comfortable at that step, and the trap is to keep finding things that need doing pre-launch, because that's what you're used to. So it might be worth reframing it in your mind.


So I have a niche market with messy data all around it, and I was thinking of cleaning and packaging that data from multiple sources into one product and selling it to the sales managers or BI teams of companies within the industry, but so far it's just been an idea in my head.

Can anyone provide advice on how I should package it and sell it or any advice towards adding extra value?


Is it proprietary data or web-scraped? That might change the strategy. If it's proprietary, with enough history, and relevant to the market you cover (and that market has listed companies operating in it), you should consider selling it to hedge funds. There is an entire ecosystem for so-called "alternative data" you should look at. But the big money is in 1) proprietary data and 2) long historical trends.


I'm interested in hearing more about this. It might not pertain to the data I'm thinking of specifically, as it would be scraped, but finding out where the holes are and creating proprietary data around them would be a consideration. However, I'm interested in diving deeper into alternative data if you have some solid sources I can brush up on.


It all depends on where/who you are, but I have thought about trying to get things listed on the Snowflake Marketplace https://other-docs.snowflake.com/en/collaboration/provider-b...


Thank you for this!


Very hard to give any advice here without telling us what this data is. Is it customer lists of technologies or products? Is it contact data? (Before you say you’re afraid to give your secret sauce away, nobody cares or has the time to implement your idea)


It's less about the data specifically and more about common add-ons or ways to add additional value, knowing my target audience would be sales managers or key decision makers on the sales side of the industry. Obviously fancy dashboards are nice, and historical data is a want, but are there any other general-purpose add-ons one could offer?


How do you go about collecting the data for these studies? Is it core sampling? And if so how many cores would you need to get a good reading?


Typically the analysis is done by digging trenches across the fault and dating pieces of charcoal found in the sediment with radiocarbon. This study is rare but not unique in using cored trees, or slices of stumps if possible, and dating the event with tree ring dating (dendrochronology), which is vastly superior when it's possible (i.e. when it is demonstrable that an earthquake kills trees). In both cases you want as much as you can get--ideally 5-10 ages at a minimum I think?


I actually think what's happening is that these amazing tools in everyone's hands are averaging out photography; at the same time, we are seeing an outlier explosion of creative photography that we haven't seen before.

That said, the dopamine hit comes from taking your own shots and seeing the visual perfection Apple creates; rarely, if ever, will those shots have the same effect on a viewer. So they don't need to be photography-perfect, they just need to appeal enough to our monkey brains and monkey eyes to deliver that shot of dopamine that makes us want to take more pictures, thereby using the phone, and the resulting cloud storage, more.


The magic of phone cameras disappears the moment you get hold of a mirrorless for 5 minutes. Even a bottom-end one is orders of magnitude better than the best phone camera, even if it has far fewer megapixels.


I love my mirrorless, but it certainly wasn't 5 minutes. The first two lenses I tried (cheap ones obviously) were pretty underwhelming. Once I got a large aperture lens, I started to really get it. Even then, so many of my photos came out dark or blurry because I hadn't learned how to pick settings or focus for different lighting conditions and subject movement speeds. Autofocus on consumer cameras is pretty trash compared to iPhone/Pixel. EyeAF my ass.

These camera companies need to invest more in their software. Superzoom, night sight, subject tracking and smart autofocus should be table stakes. Auto mode on my mirrorless should at least be on par out of the box with my phone. It's sad that the pixel phones with very old Sony sensors can take better 10x pictures than mirrorless out of the box. They need to worry less about better lenses and sensors, and worry more about better onboard compute capabilities.


Lenses make SUCH a difference. A family member asked me to shoot their wedding (no pressure, right?) and since without me it wouldn’t have happened, I agreed. I also rented some L glass for my SLRs, and holy shit was that eye-opening. Turns out that a $2000 lens is objectively better than a $300 lens, who knew?

The clarity, the sharpness, the pop - everything was improved. Good glass is a bigger difference than the body.


Actually this is potentially wrong.

A $300 lens is objectively better than a $5 smartphone lens.

A $2000 lens may be objectively better than a $300 one, but it depends on what you're standing in front of, and on you.

The Nikkor Z 28mm f/2.8 is my favourite so far and it wasn't exactly expensive.

The priority order for things is of course:

1. What the photographer is standing in front of

2. The photographer

3. The lens

4. The camera


This also introduces the difference between zoom lenses and prime lenses. You can get a good 28mm prime for much less than a good 24-70mm zoom. Most novices in photography nowadays don't start with good prime lenses, but with cheap zoom lenses.


The 16-50 that came with my Z50 is really good too.


Great! Inexpensive zoom lenses are getting better all the time. And manufacturing processes are likely also improving. The gap is narrowing.

But, at least today, you still get enhanced features on the more expensive zoom lens, such as a wider aperture and a constant maximum aperture across the entire zoom range. Neither of those things necessarily yields a superior photograph -- you don't need f/2.8 across the whole zoom range if you're taking pictures at f/6 -- but they can be very helpful. Whether they're worth paying for depends on one's personal needs, desires, and budget.


>A $300 lens is objectively better than a $5 smartphone lens.

Not sure where you are getting the $5 figure from. In any case, smartphone lenses are manufactured in vastly higher quantities than lenses for interchangeable lens cameras, so it doesn't make sense to compare the per unit cost. Modern smartphone lenses are miracles of optical engineering. See e.g. https://news.ycombinator.com/item?id=30557578 The cost of the R&D that's gone into enabling their design and manufacture probably couldn't have been recuperated if they were being used only in cameras.


Fair enough; perhaps it’s fair to say that given a specific application or lens type, a more expensive one will generally be better than a cheaper one. For example, you can get any prime or zoom you want from Canon as a normal or L variety. The latter will cost about 10x as much, and will be better. 10x better is subjective.

On the flip side, my favorite macro was a Sigma 105mm prime. Tack-sharp, and cost well under $1000. Of course, I’ve never shot with the equivalent Canon L (which isn’t quite the same at 100mm, but close).


And the light. Good lighting can compensate for a not-so-great sensor.


Can you say a bit more what you think was the factor?

-- more control of depth of field / shallower DoF ability?

-- faster shutter speeds?

-- less chromatic aberrations?

It doesn't intuitively feel like sharpness should be a factor -- even cheap kit lenses usually get that right.


* wider f-stop

* less chromatic aberration

* less distortion generally

* smaller circle of confusion

The chromatic aberration is an important but subtle effect. Remember that lenses are multiple pieces of glass, and every glass surface refracts different wavelengths of light by different amounts, like a prism. One of the considerations in lens design is converging all those different wavelengths of light in the same place. Not just at one point, but at every point across the image plane.

Poor lenses might do this well in one area. Good lenses do it everywhere.


Sorry, should have clarified. The lens in particular that made me rethink everything else I had was a 70-200mm f/2.8L. Zooms in particular often suffer from sharpness and chromatic aberration issues compared to a prime due to the larger number of optics. This lens did not. I’m sure a comparable prime stuck next to it would still show it up, but coming from kit zoom lenses, it was quite a shocking difference.

The static aperture also helps tremendously of course, yes - nice bokeh with a tight zoom means you can easily get candid portraits that look great from anywhere in the room.


The 70-200 f/2.8L IS III is the Bentley of lenses, the Aston Martin, the Maybach, etc. You've got the best hardware possible for the job; for the price it had better be amazing! Even the older ones without IS are excellent.

L glass is also a very interesting used market - those things basically don't lose value IME.


It was the IS II at the time, but yes - an absolutely spectacular piece of kit. I think it was about $100 to rent for the weekend? Very reasonable IMO, and made me realize that one could quite easily bootstrap a wedding photography business without actually owning gear.

Other than the actual business side of things, pesky details like getting clients. And the massive stress of shooting a wedding. I was happy to do it gratis for family, but I don’t think I’d want to deal with paying clients.


When I was still shooting Canon, I used a 70-200mm f/4L which I picked up for a song (~C$600, sixteen years ago?). Not the beauty of a 2.8, but having a consistent f/4 made for some beautiful shots on Cape Breton.


Lenses affect color contrast too. I don't fully grasp it, but it's something like internal reflections adding a neutral white bias, or correction tradeoffs between geometry and color. The aperture can only be opened as wide as the lens barrel allows, so that isn't it.


(Guessing the faster glass.)


I feel the same comparing my iPhone and mirrorless. It's obvious the software is years behind in almost every aspect, even for relatively easy fixes like the horribly designed and unintuitive UI choices, where the same mistakes are made year after year despite complaints... ugh! The last thing I need when taking pictures is fumbling around in 5 layers of menus to change important settings while my subject moves on and the moment has passed. It almost feels deliberate, as if the thought is that added complexity is some proxy for being "professional".

If processing power is one of the bottlenecks to getting some of the features phones can do, it would be great if there were a universal hotshoe-like way to mount phones to camera bodies to use their screens, touch capabilities, and offloaded processing power. Maybe with all phones now having USB-C it's more of a possibility. If the camera makers don't do it, I wouldn't be surprised if Apple/Google eventually do and eat their lunch.


> relatively easy fixes like the horribly designed and unintuitive UI choices where the same mistakes are made year after year despite complaints...ugh!

This is exactly the complaint I have about car manufacturers not having enough / the “right” digital UX experience.


My mirrorless definitely beats my iPhone. But I have to put in more time, it’s not with me at all times and I need to transfer the images.

In the end, the best camera is the one I have with me. And if I can take a pretty good portrait shot of my kid while I’m at a diner then I’m happy.

To me, the memory/moment is more important than the “quality”.


The Micro 4/3 M.Zuiko 45mm f/1.8 is the bee's knees, but the process of getting a useful shot into my family iCloud album is so much work that I rarely pull it out. Mirrorless cameras really should be bodies with sensors and a Thunderbolt connection to a smartphone.


Affinity Photo will let you export an image into the Photo library directly from the editor. I use it to do minimal edits (a bit of crop, exposure, maybe highlights/shadows) and then go File > Share > Add to Photos. It's a great workflow for a hobby photographer like me and I like that it is a perpetual license. Adobe products will let a pro fly through hundreds of images a day, but this is more than enough for quick edits and dumping the files into iCloud.

They offer a free trial, try it out! (I am not affiliated, just a happy user)

https://store.serif.com/en-gb/checkout/?basket=ed0b917180520...


Nikon's SnapBridge does that very well. Mine is tethered to my phone via WiFi when I'm out. Straight into my photo stream.


I bought a $1200 mirrorless which was supposedly the best in class a couple of years ago. All my photos look like they were shot on a potato compared to my iphone.

Not to mention that I don't walk around with that mirrorless camera in my pocket at almost all times.


I spent that, and my photos don't look like they were shot on a potato. Are you sure you know what you are doing?

My Z50 and kit lens fits in my pocket fine.


That's the whole point: I don't know what I am doing.


Fix that. Then you have a right to complain :)


I once asked someone with a nice piece of kit that wasn't too far from mine in cost. They said they sorted the DxOMark list top to bottom and bought one from the top, and they didn't even know what a prime was. But that approach seemed to work.


Put aperture, ISO, focus, and shutter speed on auto (if it's not a manual lens) and you will get good pictures.


I spent about that on a mirrorless and my photos blow smartphones out of the water.

It takes 10x longer to set up, shoot, and post-process, and is a hassle for many reasons, but the photo quality is extremely noticeably better.


How much did you spend on the lens that went on it?

Camera bodies will keep being updated. Glass is updated, but a lot more stable.

If I have a friend who wants to invest, say, $3,000 in a camera setup, I'd tell them to get a $1,000 body and $2,000 of lenses. A couple have thought they'd buy a $2,500 body and $500 lens, and I explained why they might be disappointed with that investment.


I agree, but for the average user who takes sunset pics, kid pics, or pet pics and then views them on the same device they were shot on, Apple's incentive isn't so much to compete with full-frame mirrorless cameras as to make pictures shot on the iPhone look as good as possible on the iPhone. That way, the shooter gets the dopamine hit when they shoot something that triggers our visual sense in a positive way.

My Sony a7IV gives me the same dopamine hit, maybe more so, as there's no better feeling than getting home, loading your footage into DaVinci, and seeing that your exposure, focus, and colours are nailed (on the other side of the coin, it's a huge punch in the gut to get home and see your focus is a little off, giving the opposite of a dopamine hit). But it's more of a process to get there, and the average user needs a faster feedback loop from shot to hit.


Ehh, no.

My typical test is to take a photo of the full moon. It works acceptably well on an iPhone (or Android). My recent Pixel phone even adjusts the brightness automatically. Sure, the lens is pretty wide-angle, so the pictures don't have many details.

I had to fiddle around for 20 minutes with settings on my Sony Alpha camera, eventually using manual focus and manual exposure. The pictures are, of course, better because of the lens and the full-frame sensor.

But the user experience is just sad. So I often just don't bother to take my camera with me on my trips anymore.

Also, a note to camera makers: USE ANDROID INSTEAD OF YOUR CRAPPY HOME-GROWN SHITWARE. Add 5G, normal WiFi, GPS, the Play Store, and a good touchscreen. You'll have an instant hit.


No. That's a hill I'll die on.

Ass end Nikon Z50, 250mm kit lens, hand held, no setup really other than shutter priority ... https://imgur.com/edCyNjV (very heavy crop!)

And a Pixel 6a mutilating a shot: https://imgur.com/290gXkU

I do not want Android on a camera. I don't want to update or reboot it. I want to turn it on, use it, and turn it off again. And I don't want someone substituting pictures of the moon with stuff from online (hey Samsung!)


Wow, that Pixel 6 shot is awesome in all the wrong ways. I have no idea how it could have happened.

Of course, cameras with large sensors and lenses are going to be better than small phone sensors. Physics is physics. It's just that it doesn't matter that much for most people (me included).

> I do not want android on a phone. I don't want to update or reboot it.

I used a Galaxy Camera back in 2014. It was awesome. I could take pictures and automatically upload them to Picasa (RIP) or share them with people. The UI was also pretty good, but it was clearly a V1 without too much polish.

> I want to turn it on, use it and turn it off again.

I have an Onyx book reader that runs Android. It works just like this. I pick it up, press a button, and it shows the book I've been reading within a second. So it's clearly possible.


Great example of the primary difference here. I've said that the photos out of my mirrorless (also Z50, great camera) are true photographs in the sense that they capture light and show it to me.

My smartphone however does not create photos, it creates digital art based on the scene. Your Pixel image is a perfect example of how algorithms (now called "AI") re-paint a scene in a way that resembles reality when zoomed out.

Comparing smartphone and camera is really apples to oranges at this point, as smartphones aren't even capturing photos, they're entirely repainting scenes.


> Comparing smartphone and camera is really apples to oranges at this point, as smartphones aren't even capturing photos, they're entirely repainting scenes.

Calm down, it's not that bad. Take night sight or astrophotography, for example: they use ML to intelligently stitch together light across time, because the available light in one moment is not enough to capture anything intelligible. The end result is an accurate representation of what your eyes see (e.g. my own face in a nighttime selfie) and what is sitting there in the sky (the stars). You can call that repainting, but I disagree; it's information aggregation over the temporal dimension.

Super resolution is similar, using the shakes in your hand to gather higher resolution than a single frame from your low-res sensor grid can provide. 2-3x digital zoom with super-res technology actually gathers more information and is closer to optical zoom. It's not just cropping and interpolating.

Now...portrait mode. That's clearly just post-processing. But also...does blurring the background using lens focus have any additional merit vs doing it in post (besides your "purity"-driven feelings about it)?

At the end of the day, I want my mirrorless to do more than be a dumb light capture machine. I spent $X thousand+ for a great lens and sensor, so I want to maximize. It should do more to compensate automatically for bad lighting, motion blur, etc. It should try harder to understand what I want to focus on. As a photographer, I should get to think more about what photo I want taken and think less about what steps I need to take to accomplish that. My iPhone typically does a better job of this than my $X000 mirrorless. So I use my iPhone more.


> Take for example night sight or astrophotography

Oh, speaking of astrophotography: it occurred to me that all those pretty images of remote planets and nebulas have been doctored to hell and back.

What I don't know is where I can find space images that show the visible spectrum, i.e. what I'd see if I managed to travel there and look out the window.

Is there such a thing?


Well, you're of course using the best example on the one side and the worst on the other side, so that's not really a fair comparison.

Apart from that: the phones generally try to compensate for their tiny sensors with highly complex software algorithms, creating something that sometimes bears only a broad similarity to the original scene. The cameras, by contrast, usually have crappy software and rely on their great sensors (and other hardware). So in an ideal world, you'd have a proper camera with good software. That software wouldn't have to do all the (good or bad) stuff that only exists to make the best of less-than-ideal image input; instead it could provide more user-friendly features that allow making quick and easy photos without having to study tutorials for a week (yes, now I am exaggerating a little on purpose :)).

This software wouldn't have to do all the crap that ends up reducing the image quality.

Please don't just think in the extremes, but look for the healthy middle way that would provide the best out of both worlds.

It is not Android itself that does the image processing, btw, but special software that the phone manufacturers add on top of Android. So this part would again be the camera manufacturer's responsibility, but this time they could focus on their central use case (helping make good pictures) instead of writing everything (like the user interface) themselves. And they could even give their users more options to extend the software for even better photos.


Please, no. A camera needs to be ready to shoot the moment I flip it on.

I also don't want it to have reduced battery life just so I can use the god-awful Play Store on it.


My eBook reader uses Android, and it's instant-on, after a week on my couch.

Instant-on Android devices are a solved problem.


I bet it's not coming from a cold start every time.


Of course. It does an equivalent of suspend-to-RAM after a few minutes of inactivity. It then can stay in this state for at least several weeks. I'm not sure for how long, I have never left my book reader for more than two weeks.

Cold reboot after updates takes about 20 seconds, like my phone.

This strategy can work for cameras.


Except for when it's fully off and you want to take a picture.


So turn your camera on once when you pack your things for the flight. It can stay in sleep mode for at least a couple of weeks without draining the battery.

Honestly, I don't see a problem.


Does the phone really take a sharp picture of the moon, though, or does it just add detail it knows is there?


At this point I would've imagined Apple would have a moon detection feature and just replace it with a stock image cutout when detected in the field of view.



Samsung actually do that.


I love my d7100 and z5. There are some pictures only they can take over a phone, but it would take a user much longer than 5 minutes to beat their iPhone. I’ve been carrying the current gen iPhone and my larger camera for years and I use the iPhone more and more. The shots are good and easy to set up. I mostly keep a zoom on my larger camera now to give me reach and often use the iPhone otherwise.


The magic of phone cameras lies in their convenience.


Only applies to the small portion of the population that enjoys the process. I could never appreciate digital cameras. Take a bunch of shots, then go home and filter those shots for the good ones, then adjust the color of those shots. No thanks, not my cup of tea.

Funny enough, I enjoy shooting film over digital. A lot less work and fewer decisions to be made.


I shoot film as well. It's definitely not less work but it is enjoyable.

As for convenience, the DSLRs can be tethered to your phone now. I shoot on mine and they go to Apple Photos.


For myself at least there is less mental load shooting film compared to digital. I am generally not taking multiple shots of the same thing, and I don't develop the film myself. Historically the two things I did not like about digital were too many photos to review and having to work on each photo at home. There is something nice about not having the choice of which shot to pick or how to adjust the colors.

I have been interested in some of the micro 4/3 cameras that have prebuilt filters in them but I think film for me is king if I have a camera.


You can run a digital camera like that too. I am more interested in composition than camera setup and spend most of my time shooting in aperture priority. At best I'll tweak the white balance, but the camera mostly just deals with that for me. I take few photos. I spent 16 days on a trek recently and took about 50 photos in total.


Phone cameras are digital cameras too.


Ok let me clarify for you. In this context I am talking about point-and-shoot cameras, bridge cameras, DSLRs and mirrorless. Everything but a phone camera.


I’ve gotten far more amazing photographs (including exhibit quality large format ones) since smart phones became a thing than I ever had with an SLR. Because I always have the phone handy.

If you’re always walking around with a camera bag? Sure. If you’re regularly in beautiful situations without one? Eh…


Basically every news event in the last 15 years is caught on phone cameras. That's the magic. A device with which you can start streaming to the world in 30 seconds.


It's mainly zillions of photos of kids and pets and food, so it doesn't matter if other viewers aren't impressed, they're impressive to the person who took them.


Please point to some of your favorite examples of this “creative explosion”.

Here’s one I posted several months ago on intentional camera movement photography:

https://news.ycombinator.com/item?id=34858318


Yeah, from what I can tell, the vast majority of people's photos are only ever viewed on a phone.


OpenAI is amassing an ungodly amount of data, though. There have been efficiency efforts for sure, but scooping all that data up is what they need to train GPT-5. Those chat logs are worth their proverbial weight in gold.


I think it’s because BM has a cult following, and part of that is because BM’s software is notoriously high quality. They seem to be doing things the right way by avoiding subscriptions and putting real engineering resources not just into their cameras but also into the software.

I, for one, was excited to see a BM product available on my iPhone now so I can see why others are just as excited. Google has made the front page for less noteworthy apps before I’m sure.


The thing about LLMs as a teacher is that the feedback loop is so short that progress can be made quite fast. Being able to ask a question, regardless of how stupid or trivial it may seem, without fear of judgement overcomes a huge learning hurdle. Add to that reverse engineering other code (“if I have this code and wanted it to do this instead, what would we change and how?”) and a short feedback loop for errors, where you might have to copy and paste something many times before it works, and LLMs become the ultimate teachers.


I agree about short feedback loops, but this was there long before LLMs, and got me into programming initially.

I loved that I could compile my program and try it without needing to check it with a teacher or superior. It cost literally nothing to try, and if it failed the computer would tell you it was wrong.

In a sense you are constantly bouncing off of reality (or the limits of the compiler) and that allows you to learn really quickly by experiment. LLMs with their tendency to hallucinate or just provide you with the solution seem like they could be both a blessing and a curse (for learning).


Providing the solution is one thing, but an LLM can also tell you more about the solution: why it is a certain way, what other examples look like, what to look out for, and so on. Having an infinite dialog about the problem space is where it really shines. Banging my head against compiler output and combing through Google to figure out _why_ something failed isn't nearly as productive, imo. Hallucinations are an issue, though I find them not nearly as detrimental as I thought I would. Then again, I'm not learning to program; I'm merely learning new frameworks or techniques. I already have the underlying foundation and can probably sniff out hallucinations better than a total novice.


I don't know about this take. A lot of what you say makes sense but your conclusion seems like a bit of a stretch.

LLMs could be very useful for learning and probably are, but they also allow you to ask the wrong questions and circumvent the learning you were supposed to be doing.

I wouldn't be surprised to see people using it to simply solve exercises for them, and those people will have a hard time trying to do things on their own.


It's actually worse than that because although LLMs are capable of a lot, they still hallucinate quite frequently and sometimes are just plain stupid.

Recently I was trying to manipulate data in Google Sheets and was using ChatGPT to help. In the beginning it was fantastic; I was very productive because I didn't have to stop and think about formulas, read crappy documentation, or analyze data transformations. ChatGPT just gave me the right answer in a split second, as long as I kept asking the right questions.

Then I stumbled upon a particular issue that wasn't really too complicated, but ChatGPT could not give me a correct answer. Unknowingly I spent 3 days trying to fix problems with the solution, and every time I got an answer that was slightly wrong in subtle ways.

I have 20 years experience as a software engineer and still, I continued to waste time in this loop. After 3 days I decided to apply my engineering skills and solved the matter in 30 min. Now I know the solution and it was way simpler than I thought.

What surprised me was how dumb the whole process was. My questions certainly weren't the problem: as bad as they may have been, the solution was never too far off. Not only had ChatGPT become a crutch, but it put me in a situation that no human tutor would ever put me in.

So removing the pain of quick interactions with a tutor has benefits but the technology is not quite ready to be considered as true guidance in forming (or even helping) someone's understanding of a subject.

I've been using ChatGPT a lot for general language but when it requires logical thinking it falls pretty flat.


I've experienced this exact frustration loop before, but I've also got a major counter-example.

I've been using GPT4 as a tool to interrogate lesson transcripts for a language I'm learning. In the prompt I tell it to focus specifically on things mentioned in the transcript; if something isn't in the transcript, it should check the helper script I update as I move through the process (which does sadly take up more and more of the context window) to figure out if my answer is in one of those previous lessons, and it should not guess. Hallucinations are quite rare. I don't think I can name an egregious instance in the 25 lessons I've done, each approximately 20 minutes in length, though I'm sure it's happened.

It's also pretty good at suggesting drills based on the contents of the lesson, there are probably a whole bunch of lesson plans in the training data.

The end result has been progress at a pace I could only dream of previously, and it doesn't matter if a question is too basic because I'm asking a computer. There is zero concern about any question being embarrassing, because it's only between me, GPT4, and the OAI engineer who happens across the conversation.


I absolutely have the same overall feeling, when the task at hand is related to text processing (including understanding and spinning off new takes or ideas from it).

But when the task at hand involves logical thinking, that's when I believe the LLMs of today are still a very much work in progress.

So I'm skeptical of trying to use them as tutors for now. I'm sure things will evolve quite quickly from now.


I'm skeptical of this takeaway.

"I spent days trying to solve X by doing Y, then it turned out I could have solved it by doing Z instead" is an experience I've had countless times before LLMs were a thing. Sometimes you really do need these three days of stumbling before you can build up the confidence to do the easy solution.

(Then again, I don't know the specifics of your case.)


In addition, I feel like you build a sort of intuition after a while and detect when the conversation with the LLM has hit a dead end. When to stop and take a step back, think what it is you're trying to do and try to go down a different path. The LLM can even support with that, it's on the human to kick that off, at least at this point in time.


I can't really disagree with you. Although I want to believe this problem is more exacerbated with LLMs than with human-tutors.

I have no evidence other than hundreds of hours using ChatGPT.


> Although I want to believe this problem is more exacerbated with LLMs than with human-tutors.

Well, yeah, but the central insight here is that LLMs enable a worse-is-better approach with a tighter feedback loop. They're not as good as a regular tutor, but a regular tutor costs money, gets impatient, is only available at set hours, etc.

Part of the appeal of learning-by-LLM is that you can get a flash of motivation at 2 AM and go "hey, I should totally learn about X, that would be cool!", open up ChatGPT, ask some very naive questions, and get just enough to get started.


(It's funny, when I wrote this yesterday I thought "this is an unrealistic example", and yet here I am, at 1 AM, asking ChatGPT a bunch of questions about Google's XLS. I'm pretty sure the answers I got were hallucinations, but at least it helped me formulate the questions for when I go to the mailing list.)


> I wouldn't be surprised to see people using it to simply solve exercises for them, and those people will have a hard time trying to do things on their own.

This is a more recent version of the argument against most machines and similar automation tools. There is usually some validity to it, but we saw the same arguments against MOOCs, calculators, computers, typewriters, smartphones, spell checkers, etc.


They should be smart enough to know when you ask the wrong questions. Perhaps they aren't there yet, but that's likely where they are going.


Which is also an interesting problem isn't it? The majority of people will only be doing what the LLM suggests...


The LLM's best role in this context is as a trusted, knowledgeable advisor/teacher. It works so well because no negative psychology can be introduced; you know the LLM isn't a human.

Like the commenter for this thread was saying, there's no fear of asking a stupid question, or of going too slowly or too quickly.


Absolutely. Fail fast, fail often. It makes it very easy to quickly try out ideas before you go down a rabbit hole. However I think for people that are completely new to programming it could quickly become a crutch.


Seconded. And it's not just programming. Any subject that is easy to moderately verifiable, like most math and science in school, can also benefit from this. We're one step closer to personal tutors and autodidacticism for everyone. The other half of the learning journey would be arranging the curriculum for yourself.

If only more people would focus on cultivating learning instead of locking down tech like this because they're afraid (and perhaps lazy to create new teaching methods).

