interpol_p's comments

I wouldn't say it's "damn buggy" — I use Notes daily, and have a significant number of notes that are synced between devices. In my notes I use rich formatting, embed videos, voice memos and lots of images. It handles it really well. I even use iCloud Collaboration feature on a few notes for planning, and for splitting regular expenses

I have three notes that exhibit the bug you mention though: the three notes I keep for each of my children's artwork. I scan the artwork using the document scanning tool in Notes, and it gets embedded as a multi-page PDF (if the artwork itself has multiple parts) or a single PDF. After many years of adding high-res scans, when I scroll to the bottom of these files it takes some time for the note to render. I think I picked the wrong tool for the job here, more than anything!


It's easier to read and navigate a well-written Swift codebase than a well-written Objective-C codebase

Conversely, it's easier to debug an Objective-C app than a Swift app, simply because compiling and debugging is so much faster, and debugging so much more reliable

I don't know about a software quality drop being attributable to the migration to Swift. So many other things have also happened in that time — much more software that Apple produces is heavily reliant on network services, which they are not very good at. I find Apple's local-first software to be excellent (Final Cut Pro X, Logic, Keynote) and their network-first software is hit-or-miss

They have also saddled developers with a ton of APIs for working with their online services. Try writing correct and resilient application code that deals with files in iCloud — it's harder than writing an application that dealt only with local files was a decade ago!

Swift is easy to blame, but I don't think it's responsible for poor software. Complexity is responsible for poor software, and we have so much more of that nowadays


I don't completely agree with you. Having used both SwiftUI and UIKit extensively, I value both of them and think they are both quite strong in different areas

I have published a word game written entirely in SwiftUI [1]; the effects and animations would have been much more difficult to do in UIKit, and the app itself would have been hairier to write and maintain [2]. I also track crashes, and this particular app has had four crashes in the past year, so I am very pleased with the stability

That said, there are definitely times, as you say, where you have to drop to UIKit. For the word game mentioned above, I had to drop down to UIKit to observe low-level keyboard events in order to support hardware keyboard input without explicitly using a control that accepts text input

SwiftUI is mature and pretty advanced — especially for graphics- and animation-heavy UI. It has limitations, particularly around advanced input event handling, as well as the application/scene lifecycle

I plan to continue to use both UIKit and SwiftUI where they make sense. It's easy enough to bridge between them with UIHostingController and UIViewRepresentable
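A minimal sketch of that bridging, in both directions (the `ActivitySpinner` wrapper and its contents are illustrative names, not from my app):

```swift
import SwiftUI
import UIKit

// UIKit -> SwiftUI: wrap any SwiftUI View in a UIHostingController,
// then present or embed it like any other view controller.
let hosting = UIHostingController(rootView: Text("Hello from SwiftUI"))

// SwiftUI -> UIKit: wrap a UIView in a UIViewRepresentable struct.
struct ActivitySpinner: UIViewRepresentable {
    func makeUIView(context: Context) -> UIActivityIndicatorView {
        let spinner = UIActivityIndicatorView(style: .medium)
        spinner.startAnimating()
        return spinner
    }
    func updateUIView(_ uiView: UIActivityIndicatorView, context: Context) {
        // No state to push into the view in this sketch
    }
}
```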

[1] https://retrogram.app

[2] Specific examples include: image and alpha masking is trivial in SwiftUI, Metal shaders can be applied with a one-line modifier, gradients are easy and automatic, and SwiftUI's TimelineView + Canvas is very performant and more powerful than custom drawing with UIKit. Creating glows, textured text and images, blurs and geometry-based transitions is much easier in SwiftUI


Totally agree

But once we opened a Discord for our product we had so many more questions and users coming in. I do not like that it is locked up on a proprietary platform organised as a chat interface, but damn is it popular and often-used. Having users communicate with us more regularly is very motivating, and we've had so much more quality feedback by having it available

It is, unfortunately, where a lot of the people are and it makes your user base feel very "alive"


Why not use AnswerOverflow to make the chats indexed and searchable? It's definitely a lot better than nothing, multiple times it has already helped me find answers to questions I had.


Thanks for the suggestion! I had no idea this existed


> Why am I doing this? Understanding the business problem and value

> What do I need to do? Designing the solution conceptually

> How am I going to do it? Actually writing the code

This article claims that LLMs accelerate the last step in the above process, but that is not how I have been using them.

Writing the code is not a huge time sink — and sometimes LLMs write it. But in my experience, LLMs have assisted partially with all three areas of development outlined in the article.

For me, I often dump a lot of context into Claude or ChatGPT and ask "what are some potential refactorings of this codebase if I want to add feature X + here are the requirements."

This leads to a back-and-forth session where I get some inspiration about possible ways to implement a large-scale change to introduce a feature that may be tricky to fit into an existing architecture. The LLM here serves as a notepad or sketchbook of ideas, one that can quickly read existing APIs I may have written a decade ago.

I also often use LLMs at the very start to identify problems and come up with feature ideas. Something like "I would really like to do X in my product, but here's a screenshot of my UI and I'm at a bit of a loss for how to do this without redesigning from scratch. Can you think of intuitive ways to integrate this? Or are there other things I am not thinking of that may solve the same problem."

The times when I get LLMs to write code are the times when the problem is tightly defined and it is an insular component. When I let LLMs introduce changes into an existing, complex system, no matter how much context I give, I always end up having to go in and fix things by hand (with the risk that something I don't understand slips through).


yeah he's really underselling the recent models' ability to do the "Designing the solution conceptually" part. I still have to be in dialogue with the AI - I ask a lot of questions, we iterate towards something - but I cover conceptual ground much more quickly this way. and then I still have to be the glue between "design" and "writing" steps, and have to manage that carefully or I get slop.

if you look at the change in capability over time, it looks like the AIs are climbing this hierarchy. "Centaur" seems to already be giving way to "research assistant". I hesitate to make predictions but I would not place money on things stabilizing here.


> I would not place money on things stabilizing here.

Wise of you. If things don't stabilize we'll need our savings.


When wiring up my projector, I needed a 10 or 20 meter HDMI cable. The first one I got produced a snowy image on the screen — it wasn't like analogue static, but it was definitely a poor quality image. I replaced that cable with a more expensive one and the image looked correct. It surprised me that there would be a difference in HDMI cables, because I thought exactly the same way — a digital signal is a digital signal


This is what happens with a damaged or underspecced cable.

The HDMI standard doesn't have a way of telling you that you really need an HDMI 2.x-rated cable and you actually have an HDMI 1.x cable. It just tries to send the signal, and if the analogue bandwidth of the cable is insufficient, the error correction can't compensate and you'll get no signal, or snow and blocks.

This is somewhat of a good thing, since many short HDMI 1.x cables will work for standards that require HDMI 2.x.


That's not really what digital implies, but you figured out the important part: When digital signals fail, they do so in a very obvious fashion. A worse cable won’t give you “less saturated blacks” or something else that's subtle, it will give you random bit errors that manifest as snow. If the picture isn't obviously bad, then it is as good as any cable will give you.


This isn’t even true with other common ‘digital’ cables.

Not all ‘Ethernet’ cables are the same. Some will give you 100 Mbit. Some will give you a gigabit. Some will give you even more. They’ve all got RJ45 on them.

“All HDMI cables are the same” is an almost-baseless corruption of a very valid critique of Monster et al.


RJ45 is just the plug. Ethernet cables are labeled with category 5, 5e, 6 etc.


8P8C is the plug.

RJ45 is a wiring pin-out standard for that plug [1]. It's also a standard for telephony, not networking -- it carries one phone line. A gross waste of pins if you ask me.

[1] Not quite. An RJ45S plug has a tab on the side that will not insert into an 8P8C jack.


Calculations change at 10+ meters. Most cables are not rated past 15.


In this case I think they are missing a lot of deserved credit. A ton of UI paradigms, established by visionOS, are taken wholesale in XR. Even down to the styling of the developer docs

Good thread outlining the comparison

https://mastodon.social/@stroughtonsmith/11364102418590280


Broken link?



Anyone know how well / if this compares with BGFX?

https://github.com/bkaradzic/bgfx


There is a modification to this design that adds a button to the top to pop out the phone: https://www.yankodesign.com/2024/09/13/dieter-rams-inspired-...


Three of those really are very specific to string manipulation, and doing it "right" (with all the possible representations of what a string can be) is inherently complex. I think Swift landed on the wrong side of defaults for this API, opting for "completely safe and correct" over "defaults to doing what I expect 99% of the time"

You can get a `utf8` or `utf16` "view" of a string and iterate it as a sequence of code units (`myString.utf8.first` gets the first UTF-8 code unit — there's no integer subscript, because indices are opaque). But even that isn't going to do what you expect with complicated emoji, or with characters that need multiple UTF-16 code units. Again, I think the vast majority of people don't care for complete correctness across all possible string representations, and possibly Swift has gone too far here — as noted by all the Stack Overflow posts and clunky API
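For illustration, a short sketch of the views in action. `caf\u{E9}` is an arbitrary example string, written with an explicit scalar escape so the "é" is the precomposed U+00E9:

```swift
let s = "caf\u{E9}"            // "é" is one Character but two UTF-8 bytes
assert(s.count == 4)           // grapheme clusters (user-perceived characters)
assert(s.utf8.count == 5)      // UTF-8 code units (bytes)
assert(s.utf16.count == 4)     // UTF-16 code units

// String indices are opaque — no s[0]. You walk them explicitly:
let first = s[s.startIndex]                    // "c"
let second = s[s.index(after: s.startIndex)]   // "a"

// The utf8 view can be materialized into a plain byte array:
let bytes = Array(s.utf8)
assert(bytes[0] == 99)         // ASCII "c"
```

The opaque indices are the "completely safe and correct" default the parent comment describes: you can't accidentally land in the middle of a multi-byte character.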

On the array-pass-by-reference, I'd argue that it's valuable to learn those semantics and they aren't particularly complicated. They offer huge benefits relating to bugs caused by unintentionally shared state. Marking the parameter `inout` is a small price to pay, and really forces you to be explicit about your intentions
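A quick sketch of what those value semantics and `inout` look like (the function names are made up for illustration):

```swift
// Arrays are value types: a plain parameter behaves as a copy,
// so the caller's array can never be mutated by accident.
func appendFour(_ a: [Int]) -> [Int] {
    var local = a            // explicit mutable copy
    local.append(4)
    return local
}

// `inout` makes mutation of the caller's array explicit.
func appendFourInPlace(_ a: inout [Int]) {
    a.append(4)
}

var nums = [1, 2, 3]
let grown = appendFour(nums)
assert(nums == [1, 2, 3])    // the caller's array is untouched
assert(grown == [1, 2, 3, 4])

appendFourInPlace(&nums)     // `&` flags the mutation at the call site
assert(nums == [1, 2, 3, 4])
```

The `&` at the call site is the "explicit about your intentions" part: shared mutable state is visible in the source wherever it happens.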


Swift was designed around emojis it seems. First page in the manual shows how you can use emojis as variable names. I get why Apple wants to be clear how there are different ways to index into strings (even if this is motivated 99% by emojis like "family: man woman boy boy skintone B"), but still, the API didn't have to be this confusing or have so many breaking changes after GA.

About structs and pointers, I'm familiar with them in C/C++ where the syntax is clear and consistent. It's not consistent in Swift. And arrays don't even act like regular structs, I forgot: https://stackoverflow.com/questions/24450284/conflicting-def...


Or you know, the couple of languages people speak that are not using ASCII…


The issue here isn't ASCII vs unicode, it's specifically symbols that are composed of multiple unicode codepoints.


(Which btw isn't exclusive to emojis, there's also Hangul Jamo that the Unicode standard says is used for "archaic" Korean characters, but emojis are probably the main use case.)
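As a sketch of how such multi-codepoint clusters behave in Swift — the family emoji 👨‍👩‍👦‍👦 is four emoji scalars joined by three zero-width joiners, written here with explicit scalar escapes:

```swift
// man + ZWJ + woman + ZWJ + boy + ZWJ + boy
let family = "\u{1F468}\u{200D}\u{1F469}\u{200D}\u{1F466}\u{200D}\u{1F466}"

assert(family.count == 1)                 // one user-perceived Character
assert(family.unicodeScalars.count == 7)  // 4 emoji scalars + 3 joiners
assert(family.utf16.count == 11)          // each emoji is a surrogate pair
```

This is exactly why a single integer index can't safely identify "the first character" in every representation.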

