Swift Programming Language Evolution (github.com/apple)
175 points by rufus42 on May 9, 2016 | 173 comments



To me, the most interesting empirical result of the Swift language evolution so far is the lack of urgency on language-level concurrency.

If they think they can be competitive for years (at the very least vs. Objective-C) with only platform-level concurrency support, it makes me wonder whether concurrency is generally better provided by the platform instead of the language for most needs.

Conventional wisdom would have it that for a systems & application language designed in 2014, a good concurrency story would be close to job #1.


Go (which I think you're alluding to) has language-level concurrency because its syntax is extremely rigid. Swift's syntax is flexible enough that this isn't needed.

Specifically:

* Support for annotations; the compiler/runtime can be informed about safety and marshaling concerns.

* Anonymous function bodies. You can implement Go's "go func() { ... }()" pattern yourself, and a concurrency runtime can implement it. Rust uses the exact same pattern.

* Generic iterators/streams. No need for channels as a language primitive, since you can write a generic channel implementation.

Go is nice, but its language-level concurrency is very much a side-effect of its intentionally impoverished type system. For example, the built-in "chan" type exists because otherwise a generic channel implementation would have to use interface{}, which is not type-safe and would be hell to work with.

Here's what's lining up to become Swift 4.0's concurrency support (all the concurrency models!): https://github.com/apple/swift/blob/master/docs/proposals/Co....


I like this particular example of how easy it can be to make Swift act like it has language-level concurrency features: https://github.com/beeth0ven/BNQueue/blob/master/BNQueue/BNQ...

Some enum and extension magic easily lets you write things like:

  Queue.UserInitiated.execute {
    let url = NSURL(string: "http://image.jpg")!
    let data = NSData(contentsOfURL: url)!
    let image = UIImage(data: data)

    Queue.Main.execute {
      imageView.image = image
    }
  }
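
The linked file has the actual implementation; a rough sketch of the enum + extension idea (not the real BNQueue source, and using the Swift 2-era libdispatch API) could be as small as:

    import Foundation

    // Sketch only: wrap GCD queues in an enum so trailing-closure call sites
    // read like a built-in language feature.
    enum Queue {
        case Main
        case UserInitiated

        private var queue: dispatch_queue_t {
            switch self {
            case .Main:
                return dispatch_get_main_queue()
            case .UserInitiated:
                return dispatch_get_global_queue(Int(QOS_CLASS_USER_INITIATED.rawValue), 0)
            }
        }

        // execute(_:) just hops onto the underlying queue.
        func execute(block: () -> Void) {
            dispatch_async(queue, block)
        }
    }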


That's awesome. Blocks were always my favourite part of Ruby, and I think it's really cool that they have been adopted into Swift and (to a lesser extent) Rust.


I'm not sure if Swift really needs language-level concurrency support. As it stands, you can wrap any C library which provides some sort of concurrency and make it feel like native functionality because of Swift's syntactic sugar, such as trailing closures. For example, I'm a huge fan of the library Venice[0], which provides CSP by wrapping libmill (single-threaded like libuv, but using setjmp/longjmp instead of callbacks, unlike libuv), essentially providing a Go-level API without language-level support. It's what Zewo[1] is built on, and what allows all of its APIs to be synchronous without any extra effort.

[0] https://github.com/VeniceX/Venice

[1] https://github.com/Zewo/Zewo


Yes, if the Swift team could continue from Zewo and Venice, it would save a lot of resources compared to building from the ground up.


I think whether to deeply integrate concurrency into a language or not is quite an interesting tradeoff. Without a preferred concurrency solution (like Swift, C++, Java, ...), users of the language can leverage a lot of different concurrency solutions, from real threads to event loops/Rx and everything in between. However, I think that in the meantime this hurts the ecosystem around the language. When some libraries are built around primitive 1 (which might require an event loop or async/await) and others around primitive 2 (M:N scheduled fibers), these libraries might not be easily combinable in your application. Languages with a preset solution (like Go or Erlang) avoid this problem.


Here is my proposal for syntax support for the existing callback+libdispatch patterns: https://gist.github.com/oleganza/7342ed829bddd86f740a#async-...

The idea is to flatten the syntax without changing the runtime model, in the same spirit as they did error handling: feels like exceptions, but without ugly stack manipulations.
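
To make the before/after concrete, here is roughly what the callback pattern looks like today, plus a purely illustrative flattened spelling (the keyword and shape are my paraphrase, not the gist's exact syntax):

    import UIKit

    // Today: completion-handler pyramid over plain GCD (Swift 2-era API).
    func loadImage(url: NSURL, completion: (UIImage?) -> Void) {
        dispatch_async(dispatch_get_global_queue(Int(QOS_CLASS_USER_INITIATED.rawValue), 0)) {
            let image = NSData(contentsOfURL: url).flatMap { UIImage(data: $0) }
            dispatch_async(dispatch_get_main_queue()) { completion(image) }
        }
    }

    // Hypothetical flattened spelling in the spirit of the proposal,
    // analogous to how try/throws flattened NSError handling:
    //
    //   func loadImage(url: NSURL) async -> UIImage?
    //   let image = await loadImage(url)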


So Swift has been out a while; what do people think of it? What other language would you compare it to (e.g. C#, C, C++, etc.)? What about the libraries: are they well laid out?


My favorite thing about Swift is that it seems to get out of the way - and when it gets in the way it's usually with a nifty language feature (like the { $0 + $1 } closure syntax). I'm very excited for the future - between Go and Swift we now have two fast compiled languages that are almost as expressive as their slower dynamic/interpreted cousins.
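
For example, the shorthand argument names turn one-liners like these into something close to plain math:

    let numbers = [3, 1, 4, 1, 5]
    let sum = numbers.reduce(0) { $0 + $1 }   // 14
    let doubled = numbers.map { $0 * 2 }      // [6, 2, 8, 2, 10]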

As an aside, I like that Apple is betting the farm on ARC. I wish I could have been a fly on the wall when they were discussing ARC versus GC.


There is a pretty good comment from Chris Lattner about ARC versus GC : https://lists.swift.org/pipermail/swift-evolution/Week-of-Mo...


I really like the `deinit` construct that comes with ARC, which lets you know when an object is about to be deallocated. Makes it much easier to find memory leaks, imo.
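
A tiny sketch of the idea:

    class ImageCache {
        // deinit runs as soon as the last strong reference goes away; if this
        // never fires, something is holding the object longer than expected.
        deinit {
            print("ImageCache deallocated")
        }
    }

    var cache: ImageCache? = ImageCache()
    cache = nil   // prints "ImageCache deallocated" right here under ARC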


Indeed. `deinit` has saved me from things that would have caused much greater problems later on. It's probably one of the more subtle features that one would miss the most when moving to a language that doesn't have it.


awesome!


I love Go, but if there is one thing it is not, it's expressive, especially when compared to its dynamic cousins like Ruby/Python. That is a tradeoff I am willing to live with, but there is no need to get starry-eyed over it.


Can you give an example? I've been writing a bunch of swift recently and I actually find it somewhat more expressive than either of those.


GP was talking about Go. Not to beat a dead horse, but not having generics results in writing similar code over and over again, e.g. when working with collections.


ah, apparently i'm blind. yeah, swift generics are nice even though they can be somewhat tricky to work with.
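
To make that concrete, a toy example: one generic function covers every Comparable element type, and the tricky part only really shows up once protocols with associated types enter the picture.

    // One implementation instead of one copy per element type.
    func largest<T: Comparable>(items: [T]) -> T? {
        var best: T?
        for item in items where best == nil || item > best! {
            best = item
        }
        return best
    }

    largest([3, 1, 4])        // Optional(4)
    largest(["a", "c", "b"])  // Optional("c")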


I'm always a bit fascinated when what everyone knows is the next thing turns out not to be. RISC is a classic example. (ARM is light CISC compared to the sort of true minimal RISC I'm talking about.)

GC is borderline but it feels like it might be one of those. In retrospect one giveaway is how easy it is to avoid almost all memory problems in C++ with RAII and STL.


> In retrospect one giveaway is how easy it is to avoid almost all memory problems in C++ with RAII and STL.

Not at all. It's easy to think you're avoiding those memory problems, but they invariably crop up again and again.


>I wish I could have been a fly on the wall when they were discussing ARC versus GC.

Apple shipped a tracing GC (RC is a form of GC) for a while, but couldn't get it to work reliably or with adequate performance. ARC was a bit of a "Hail Mary" and is problematic in its own right, but certainly better than the GC it replaced.

Marcel


> ARC was a bit of a "Hail Mary" and is problematic in its own right

I'm just curious, with your experience, what you are pointing to as problematic. Other than potential extra release calls in tight loops, the main downside I saw was that it made the use of C structs way less appealing---at the same time, I wouldn't want to work with manually memory-managed C structs in GCD blocks.


Problems:

- Lots of extra retains/releases can cause significant and unpredictable performance degradation, and even crashes: http://blog.metaobject.com/2014/06/compiler-writers-gone-wil...

- The language went from fun dynamic, optionally statically typed to strict static typing that restricts exploratory programming: http://blog.metaobject.com/2014/03/cargo-cult-typing-or-obje... http://blog.metaobject.com/2014/05/the-spidy-subset-or-avoid...

- People no longer write class-side convenience initialisers :-((

- The benefits are mostly marginal, and the non-marginal benefits are needlessly bundled with ARC

- The C interaction model of the Objective-C hybrid language was made much more complex, somewhat defeating the purpose of a hybrid language

That's off the top of my head.


For anyone curious as to what ARC is in this context (not the dialect of Lisp HN is written in): https://en.wikipedia.org/wiki/Automatic_Reference_Counting

Last time I did any Objective-C you had to retain/release yourself so the auto stuff is interesting. As far as I can tell the benefit over Garbage Collection is that GC only works well when you have lots of excess spare memory, which is constrained on mobile devices.


There are hybrid systems that combine the prompt deallocation of pure reference counting with the superior throughput of tracing garbage collection.

Part of the reason I really dislike the "reference counting vs. garbage collection" debate, and keep emphasizing that reference counting is garbage collection, is that it sets up this false dichotomy. In reality, there are all sorts of automatic memory management schemes that combine aspects of reference counting with tracing in different ways, most of which were created in the 80s and 90s. Sadly, almost nobody in industry is aware of this enormous body of work, and the simplistic "RC vs. GC" model has stuck in everyone's heads. :(


Not only is reference counting a form of garbage collection (as pcwalton pointed out), it is also not the case that you had to retain/release stuff yourself, certainly not since Objective-C 2.0's properties.

Here is the code to define and use a property pre ARC with properties:

   @property NSString *name;
   ...
   object.name = @"Marcel";
And here is the same code with ARC:

   @property NSString *name;
   ...
   object.name = @"Marcel";

Spot the difference? Now it turns out that there are some differences, such as automatic generation of -dealloc methods and weak references and some cosmetic stuff. But overall, it's at best a subtle difference and for most code you won't be able to tell the difference.

Pre Objective-C 2.0, there were solutions such as AccessorMacros[1], which handled the same use-cases except without the dot syntax (which is somewhat questionable) and have the advantage of being user-defined and user-extensible, so for example if you want a lazy accessor, you don't have to wait for your liege lord, er language supplier to add them, or create a whole new language to do the trick. Instead, you just write 4-5 lines of code and: done!

[1] https://github.com/mpw/MPWFoundation/blob/master/Classes/Acc...


This is one of the most uninformed posts I have read in a while. As someone who has been developing in Objective-C for the last 6 years, and been through the transition from MRC to ARC, none of what is stated in this post is accurate.


Actually, all of it is accurate. Since you're spouting off your credentials as the only evidence for why what I wrote is wrong [not sure how that works], here are mine:

- programmed in Objective-C for ~30 years

- implemented my own pre-processor and runtime (pre NeXT)

- programmed in the NeXT ecosystem professionally since 1991

- additionally, worked in Objective-C outside the NeXT/Apple ecosystem for many years

- worked with Rhapsody and with OS X since the early betas

- worked at Apple for 2 years, in performance engineering (focus: Cocoa)

- one of my projects was evaluating the GC

With that out of the way (and just like your 6 years, it has no actual bearing on correctness): which specific parts do you believe are inaccurate? I'd be happy to discuss, show you why you're wrong, or correct my post if you turn out to be right on something that can be verified (your opinion as to how awesome ARC is doesn't count).


I'm guessing LeoNatan has an issue with:

> it's at best a subtle difference and for most code you won't be able to tell the difference.

Which is a pretty dubious claim. I removed a lot of retain/release/autorelease calls when I moved to ARC. Perhaps I'm missing the OP's point...


Can you quantify "a lot"?

My personal frameworks consist of 205584 non-comment, non-whitespace, non-single-bracket lines of code. Of these, 304 contain a retain, 1088 an autorelease, and 957 a release. That's 0.15%, 0.52% and 0.46% of the code respectively, for a grand total of 1.13%.

I'd have a hard time calling around 1% of total code "a lot", especially since the bulk of that is very simple boilerplate and trivial to write, but I guess everyone is different.

Mind you, this is a less-than-optimal code base, dating back to the mid-1990s, with much more "but I am special" code that does do manual management where it shouldn't. Code I write today, even without ARC, has a significantly lower R/R density, well under 1%.

However, even of that 1%, the bulk is (a) releases in dealloc and (b) autorelease in class-side convenience initializers.

Quite frankly, I really miss convenience initializers in typical ARC code: writing [MPWByteStream streamWithTarget:Stdout] is so much nicer than [[MPWByteStream alloc] initWithTarget:Stdout] that (a) I wish people would write convenience initializers even in ARC mode (my experience is that they don't) and (b) I wrote a little macro that will generate an initializer and its convenience initializer from one specification. It's a bit nasty, so I'm not sure I'll keep with it.

For the releases in dealloc, I once wrote an auto-dealloc that grubbed through the runtime to automatically release all the object instance variables (with an exception list for non-retained ones). It probably would have allowed me to eliminate the bulk of releases, but somehow I just didn't find it all that advantageous, writing those dealloc methods was just not that much of a hassle.

What may be interesting here is that the fact that I had an alternative may have been instrumental to realising it wasn't that big a deal. Things seem a lot worse when you don't have an alternative (or feel you don't have an alternative).

The same applies to ARC itself, at least for me: before ARC was released, it was exactly the solution I had wanted, especially in light of the GC madness. Again it was once I had used it in practice that it really became obvious how insignificant of an issue R/R was.

The only way I can see of getting significantly higher than 1% R/R code is by accessing instance variables directly, either because you are writing accessors by hand (why?) or grabbing at those instance variables without going through their respective accessors (why?!?!?). In both cases: don't do that.

Yet, whenever I mention these straightforward facts (particularly the numbers), people jump at me. Which is interesting in and of itself. My only explanation so far is that people generally write much, much worse code than I can imagine, or that R/R looms much larger in the collective Apple-dev psyche than can be justified by the cold, hard facts of the matter.

My guess is that it's a little of the former and a lot of the latter. As a case in point, one dev who had jumped at me on a mailing list came back to me a little later in a private mail. He had converted one of his ARC projects back to R/R and was surprised to find what I had written to be 100% true: the R/R portion of the code was tiny and trivial, much less than he'd imagined, and hardly worth noticing, never mind fretting about.

However, the collective paranoia around R/R and the RDF around ARC seems to be big enough that reality doesn't really stand a chance. Which is of course also relevant. Perception matters, and that's why ARC is important.


I think they had to go with ARC due to the requirement that swift interoperates with Objective-C. If that hadn't been a constraint, yeah it would be an interesting decision.


No - they already had GC working with Objective-C and could have chosen it for Swift if they had thought it was the best technology.

Here's a quote from Chris Lattner:

"GC also has several huge disadvantages that are usually glossed over: while it is true that modern GC's can provide high performance, they can only do that when they are granted much more memory than the process is actually using. Generally, unless you give the GC 3-4x more memory than is needed, you’ll get thrashing and incredibly poor performance. Additionally, since the sweep pass touches almost all RAM in the process, they tend to be very power inefficient (leading to reduced battery life).

I’m personally not interested in requiring a model that requires us to throw away a ton of perfectly good RAM to get a “simpler” programming model - particularly one that adds so many tradeoffs."


Yes, they had GC working with Objective-C but there were so many problems with it that they dropped the GC in favor of ARC years ago. By the time Swift came along, GC with Objective-C was no longer an option.


Chris Lattner was already working on Swift when the decision to drop GC was made. Guess who made the decision? Chris Lattner. If anything, GC was dropped because of Swift, not the other way around.


And it was only an option on OS X, never on iOS.


Me too. Is something similar available in Rust? I'm familiar with smart pointers in C++ but I love that Swift manages a lot of that mess for me.


Generally data lifetime in rust is fully deterministic and the borrow checker can statically determine when data should be deallocated. If for whatever reason you do need reference counted semantics there are options in the stdlib (alloc::rc).


And to elaborate on this, there's a namespace clash: Arc in Rust is _atomic_ reference counting, and Swift is _automatic_ reference counting, which, even more confusingly, is implemented using atomic reference counting in my understanding.

Automatic reference counting inserts all of the refcount bumps. In Rust, you have to write the up count yourself, but not the downcount.

But, reference counting isn't used very often, at least in my experience. It's very useful when you have non-scoped threads, though.


As far as I can tell, the impl of Arc and ARC are actually basically identical at the high level, with the only major diff being ARC only keeps 32-bit counts on 64-bit (so they only waste one pointer of space).

Everything else is just where retain/release (clone/drop) calls are made. Rust is insanely good at not frobbing counts because you can safely take internal pointers and move pointers into a function without touching the counts at all. Swift has lots of interesting optimizations to avoid touching the counts, but it's hard to compete with a system that has so much great static information related to liveness of pointers and values.

As a simple example, consider this code (which is conveniently valid Swift and Rust, modulo snake_vsCamel):

    let x = make_refcounted_thing();
    foo(x);
This code in isolation will always not frob counts for Rust. This code may not frob counts in Swift, depending on what follows `foo`. In particular if foo is in tail position (or at least tail-position-for-the-life-of-x), then we can avoid frobbing, because we know foo's final operations will be to release its function args. foo may in turn punt releasing its function args to anyone it passes them to as a tail call. Note that Swift and Rust both have the usual caveats that "less is a tail call than you think" thanks to side-effecting destructors.

The takeaway is that Rust's default semantics encourage releasing your memory earlier, which in turn means less traffic on the reference counts. Particularly interesting is that in Rust, you can run a function arg's destructor earlier than the end of the function by moving it into a local variable. In Swift, I do not think this is possible (but I wouldn't be too surprised to be wrong -- Swift has tons of special attributes for these little things).


Go is about as fast as Java is, though with a lower memory requirement.


I have a production app in the AppStore in 100% swift 2.2. To me it's the most exciting new language out right now. It does not have too many brand new features that other languages don't have but it's implemented most of those modern features in a very solid robust easy to understand and use way. It's functional but not purely, it's object oriented but has great ways to avoid the worst of the designs most of us have been bitten by in the past. We are in a Java, PHP, Node.js, and Swift shop. Swift is BY far the least bug prone and most stable code. It's fast and easy to understand.

It's not perfect: generics and protocols are fuzzy at best, and if you aren't careful with optionals you can actively harm the stability of your code base. It has a long way to go on the server side, but I do believe that it will be, and should be, a great server-side language platform. That said, in my opinion it's the best language I've worked with professionally.

For reference, I've professionally written a decent amount of code in: Python, Java, C#, C++/C, Objective-C, Perl, PHP, Node.js/JavaScript, VB6/VB.NET, Ruby, Groovy, and Scala.


How are generics 'fuzzy at best'? I can understand why you could think that associated types in protocols can be a pain in the ass, but there are reasons why it's done this way.


Sorry was generalizing. I meant generics and protocols in combination. The ability to define a protocol based on a generic would be fantastic. It's something that is solved in Haskell fairly well.


Isn't that somewhat possible with extensions and constraints? At least that was the impression given by last year's WWDC talk [1], unless you're thinking of something else?

[1]: https://developer.apple.com/videos/play/wwdc2015/408/
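
It is, for the constrained-extension part at least. A small sketch in Swift 2.2 syntax (what it still doesn't give you is the ability to use the protocol itself as a type parameterised over its associated type, which is what the grandparent seems to want):

    // Add behaviour to every collection of Comparable elements,
    // without touching the concrete types themselves.
    extension CollectionType where Generator.Element: Comparable {
        func isSorted() -> Bool {
            var previous: Generator.Element?
            for element in self {
                if let last = previous where element < last {
                    return false
                }
                previous = element
            }
            return true
        }
    }

    [1, 2, 3].isSorted()    // true
    ["b", "a"].isSorted()   // false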


Can you elaborate more on Swift vs Scala? What are the strong and weak sides of each language?


Scala tries a lot harder than Swift to unify object-oriented and functional programming principles.

For example, in Scala operators are (IIRC) implemented as methods on objects, there's a 'Nothing' bottom type that is used for covariant generic parameterization, and ADTs are implemented using inheritance in the form of case classes.

Swift has no top-level object type or bottom type, and a lot of its more functional style features (ADTs in the form of 'enums', value types that enforce immutability) are completely divorced from the object-oriented part of the language.


Very similar languages IMO. Scala is a little heavier on the functional and academic fronts. Its syntax is also much heavier on symbols. In general I think Scala has a steeper learning curve. I like Scala though. Similar ideas at their core, a mixed-paradigm approach. Similar also in that both have a legacy language that they need to interop with. That legacy also bleeds through to both pretty heavily. I also much prefer a native ecosystem vs the JVM.

The one big area that separates the languages is tooling. I personally think Swift's tooling is much better. Faster compilation, faster runtime, faster startup, and better ide tooling.


I'm a big fan. I love the statically inferred type system, generics, & optionals. Also really like a lot of the functional programming concepts + value types but still enjoy being able to fall back on OOP. It feels like the best of both worlds. I can't wait till we have language native concurrency techniques, so I can start writing swift in backend code.


I was disappointed to see that concurrency support won't be in Swift 3.0.


Me too, but I’ve realized that I’d rather the Swift team not half-ass such a big feature, especially when they’re working on ABI stability and translation of Foundation APIs.

Plus, I bet one of the things that’s holding proper concurrency back is Apple’s frameworks. I’d rather deal with callbacks for another year or two than use a rushed language feature.


Concurrency is done with the libdispatch library, which is also open source. Right now, it's only compatible with OS X, but in their post they said that "For Linux, Swift 3 will also be the first release to contain the Swift Core Libraries."

https://swift.org/blog/swift-3-0-release-process/


I'd put Go at the top of the languages to compare it to. Maybe Java as well. The ecosystem still needs some time, but Swift's potential on the server is excellent:

- It is incredibly fast compared to current interpreted languages (i.e. a factor of 30+ vs. Python, possibly faster than C)

- Linux & OS X, open source

- Typed, safe

People like node.js mostly b/c it allows some code to be written once and run on both the server as well as the client. Considering almost any Web API also has an iOS client, Swift has the same potential.


Every new language has someone claim it's faster than C. I'll believe it when I see it.


Lots of languages ARE faster than C, including older languages than C, like Fortran and Forth.

Being faster than C is not anything special in itself.

Most things being equal (typed, optimized, compiled, no runtime etc) C is mostly faster when it does something with a lower overhead than some other language (e.g. a specially written hashmap algorithm targeted to some program vs C++ std map type), not because of its primitives being faster.


Fortran especially. Forth seems to depend on processor characteristics. I seem to remember that Ada compilers routinely produce faster code than C compilers.


C has very loose semantics that make it difficult for a compiler to reason about code; as the saying goes, C is often little more than portable assembly. There's a reason that CLion was a big deal when JetBrains announced it: reasoning about C/C++ code is HARD.


If I wasn't more interested in Agent oriented languages, it might be interesting to see what could be done to design a new language built for performance. I would imagine you'd start with the Ada side of the house and work your way back to more pleasing syntax.


Ada and Rust are both significantly easier to perform aliasing analysis on, so frequently you can get much better code. Both languages have slight overhead for bounds checking and such, but you can turn it off (at least in Ada) if it's a problem (hint: it isn't).

Same reason Fortran is fast actually: it just disallows pointer aliasing entirely¹, meaning you get none of the flexibility of C pointers (heck, you don't even have pointers, basically), but if you're multiplying matrices it flies.

¹ I recall newer Fortrans have pointers, but as I'm not a Fortran programmer I don't actually know.


You can call get_unchecked() instead of [] to elide the bounds checking at any place that does it in Rust.


As someone mentioned below, C can almost be seen as a portable assembly. Given enough time and optimization it will always be as fast as the hardware allows. I guess my point was it is meaningless to say "faster than C" because it always depends on too many factors.


C used to be very slow on 8- and 16-bit home micros, to the point that hobby Assembly coders could write much better code.

It arrived where it is today thanks to 40 years of research and exploring UB in optimizations.


Both C++ and Rust beat C in many cases for various reasons.


Do you have a link to some benchmarks at hand? Your "possibly faster than C" claim sounds too good to be true without a source, but I'd very much like to be proven wrong :)


By http://benchmarksgame.alioth.debian.org/u64q/which-programs-... Swift's performance is similar to Java's, but quite a bit faster when working with many objects: http://benchmarksgame.alioth.debian.org/u64q/swift.html On a few benchmarks it's faster than C as well: http://benchmarksgame.alioth.debian.org/u64q/compare.php?lan... though not most.

I suppose they were able to do the binary-tree benchmark so much better than Java, because Swift supports 'UnsafeMutablePointers': http://benchmarksgame.alioth.debian.org/u64q/program.php?tes...

Edit: This Quora question mentions the performance of Swift before and after unsafe programs were included: https://www.quora.com/In-terms-of-performance-speed-is-Swift... The memory safe version used to be about 24x slower than C.
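
For the curious, the "unsafe" versions lean on manually managed buffers roughly like this (a sketch, not the actual benchmark source; Swift 2-era API, later renamed to allocate(capacity:)/deallocate()):

    // Manually managed storage: no ARC traffic, no bounds checks.
    let count = 1_000
    let buffer = UnsafeMutablePointer<Int>.alloc(count)
    for i in 0..<count {
        buffer[i] = i * i
    }
    var total = 0
    for i in 0..<count {
        total += buffer[i]
    }
    buffer.dealloc(count)
    print(total)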


I don't think these benchmarks are realistic. Yes, you can use UnsafePointers, but that's not the real world case. ARC, runtime generics and structs have a huge cost in real world programs.

Swift is unfortunately usually an order of magnitude slower than Java and C# in the real world, according to the last benchmarks I've made. I'm hoping that will change because I really love Swift.


Please contribute Swift programs that you think are "the real world case":

http://benchmarksgame.alioth.debian.org/play.html


Agreed. Swift is quite high-level compared to C or C++, for examples. As with any languages, there are tradeoffs with performance.


>> I suppose they were able to do the binary-tree benchmark so much better than Java, because

… because it's faster to use a memory pool than GC for that … ditto C Rust Ada Fortran C++ …

http://benchmarksgame.alioth.debian.org/u64q/performance.php...


>> Edit: This Quora question mentions the performance of Swift before and after unsafe programs were included

That Quora answer makes claims which are not true.

The Swift benchmarks game programs were always compiled with -Ounchecked (since Dec 7 2015).

The before was naive transliterations (from other programming languages) of single-core programs -- just to demonstrate that Swift was installed.

The after was someone doing the work and contributing programs written for Swift and written for multi-core.


Obviously keep in mind the various caveats that come with benchmarks, but it seems that Swift is at least capable of achieving C-like performance[1] in some circumstances.

[1]: https://benchmarksgame.alioth.debian.org/u64q/compare.php?la...


See https://gist.github.com/MatthiasWinkelmann/d1f19a11d539e609f... for something quick&dirty.

Note that I don't want to claim that Swift is faster than C in general/real life/anything but a few microbenchmarks. But the two just being within an order of magnitude makes a really strong case for Swift, I believe.


A bit of justification would be nice why you think Swift can be faster than C.


I made an app with Swift (my only iOS app). I like it. I like clean, easy-to-read languages where I can get stuff done quickly and reading code isn't a huge pain and using a new library isn't a huge pain. Swift seems to fit those preferences so far.

The debug messages were confusing. I never used Obj-C so I'm not sure if that's iOS or Swift that causes confusing error messages.


> The debug messages were confusing. I never used Obj-C so I'm not sure if that's iOS or Swift that causes confusing error messages.

While it has gotten a little better, it is Swift.

(EDIT:) But I've been loving Swift so far! I've found that there is some consistency to the weird errors, so eventually you can start to map them to past experiences.


I've been playing around with it a little and it's... fine? Very capable. Not run into any moments that have blown my mind, but neither have I found any that disgusted me.

In a weird way it actually reminds me most of TypeScript - JavaScript-y, but with types and stronger enforcement of rules.


I've been quipping that Swift is the Javascript of tomorrow, today!


For people coming from C/C++/ObjC, some things in Swift can take some time to get used to, e.g. `if case .Success(let person) = personResult { ... }` My first thought when I saw this was, "only a mother could love this syntax", but later you come to appreciate and enjoy the syntax.

Link: https://www.natashatherobot.com/swift-guard-better-than-if/
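
For anyone who hasn't run into the construct, a small sketch (the types here are made up for illustration):

    enum PersonResult {
        case Success(Person)
        case Failure(String)
    }

    struct Person {
        let name: String
    }

    func greet(result: PersonResult) {
        // Pattern-match a single enum case without a full switch:
        if case .Success(let person) = result {
            print("Hello, \(person.name)")
        }

        // guard flips the test and keeps the happy path un-indented:
        guard case .Success(let person) = result else { return }
        print("Hello again, \(person.name)")
    }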


I've used ObjC since NeXTSTEP 3.3, and I'm trying to work with Swift. At this point, I've classified it in the same vein as Transact-SQL, a language I need to learn and use, but one I will not love. I guess I love the selector syntax too much. Shame that F-Script never caught on.


I love guard statements, but Swift's `if case...` syntax is horrendous. I'm approaching a year of full-time Swift and I still struggle to get the syntax right on the first try.


For me the most exciting thing about Swift is the recent rumour that Google are looking at adopting it as a first class language for Android.

If that happened, you'd finally have an open, fast, expressive, modern language that you could use on the server and on the frontend for iOS and Android. Couple that with TypeScript & Aurelia for the web frontend and I'd be in programming nirvana.


Having developed in Objective-C since iPhone was released, I am very appreciative of Swift. It really puts in a lot of effort to make it hard/impossible to write code that crashes. Swift is such a safe language to develop in, I was able to switch from ObjC to my first Swift project and release builds to my client for months without any reported crashes in my code.


The neat syntax is sometimes butchered by unnecessary casting orgies. Swift's outstanding feature is its simple C interoperability. The compilation times (Xcode) are lengthy compared to C or C++.

Example: https://youtu.be/mgpAmqdiPKE


Semantically, it's probably closest to Rust (from my limited exposure). The syntax is slightly changed, but most things have a direct parallel between the two languages. Although Swift doesn't do ownership/borrow checking like Rust does and is a lot more relaxed in that department.


Apple actually directly cites Rust as inspiration for Swift.


I tried it and pretty quickly went back to C#. The Apple-only nature was the major downside to me, so maybe these kinds of changes will help (assuming it sees wide adoption outside of Apple-land).

Check your local job boards, but if sell yourself as a Swift dev you're likely to be pigeon-holed into Apple-centric development for the foreseeable future.

With Apple sales and market share dropping like a rock in the last 12 months, that might not be the best position to put yourself in long term, career-wise.


>With Apple sales and market share dropping like a rock in the last 12 months, that might not be the best position to put yourself in long term, career-wise.

Like "a rock"? Where exactly did you see that?

Apple STILL sold 50M frigging iPhones in its "failed" quarter -- and had more profits and revenues than its next 3 competitors combined.

And that with supply constraints for mobiles, Intel dragging its feet with laptop/desktop CPUs, and an atypical extraordinary last year-over-year quarter to compare to.

Here's the relevant chart: http://www.statista.com/statistics/263426/apples-global-reve...


That's rich coming from a C# developer looking at mobile...


You may have missed the announcement where MS bought out Xamarin and is now giving it away for free. You owe it to yourself to at least give it a try while waiting for actual cross-platform Swift.

90+% of your mobile code can be shared between Android and iOS. Not sure if that's something currently possible with Swift or not, but it was worth keeping C# around for our needs.

We're more of a "mobile app is something we also offer" and web is our main presence though, so your mileage may vary (especially if you're mobile-only or mobile-centric).


If there was a common native language between Android/iOS, it would make things easier for sure. But using a third language to solve the existing problem is a rookie error.

> 90+% of your mobile code can be shared between Android and iOS

This stat depends on how complex your app is, but it usually only applies to gaming. Otherwise it's almost always false.

We've already been down this path with the webview craze a few years ago. It was a total nightmare. These newer cross platform frameworks may no longer use webviews, but under the hood things are just as gross.

What is true though is that these cross platform frameworks always oversell themselves (with the exception of Unity3D). You end up trading in one problem for another.

Among the many problems you will encounter:

1. You miss a lot of newer features in the native platforms. If you want to incorporate those features somehow, the code becomes a conditional mess.

2. Performance is very meh and your hands are mostly tied in optimization. In both Android and iOS, getting performance correct (for example in a table view) requires a lot of tweaking. All of the cross platform frameworks I've used, including React Native, work for basic cases but quickly start dropping frames after that.

3. The majority of a mobile app's code is front-end, and this is not the place where you want to share code. Users on each platform expect different types of interactions and behaviors, not to mention UI aesthetics.

4. These different platforms all have their own characteristics and ways of doing things. These differences are not easily abstracted out. At the point where you are coding around these differences, you might as well have two different code bases. For all the complaining about differences between WebKit/Chrome/IE etc, the behavior is remarkably similar -- mobile platforms are far more diverged.

5. Native third-party libraries are very difficult to use because there is never the same library on the other platform. If there is, say with Facebook, they are not in sync.

6. You end up writing a lot of bridge code between iOS/Android and the framework. It's never pretty and painful to debug.

7. Getting locked into a third-party's framework is a bad place to be later on. Once it's in there, it's never coming out.

I could go on...

If you absolutely need to go cross platform early on, the best thing you can do for yourself is to lock down a common API/data model as early as possible, and create an aesthetic and design that is simple to implement on both platforms.


I can see you have no idea what Xamarin is. With Xamarin, you still use UIKit and the native Android UI, the difference is that you program it in C# and thus the non-UI code can be shared seamlessly. Performance is not "very meh" as the UI is completely native and Xamarin compiles the C# to native code ahead-of-time. It has literally nothing to do with WebViews.


Hey, you need to read the parent a bit closer.

Nobody's saying that Xamarin uses web views, but clearly there's a comparison to be drawn with the craze a few years back for writing mobile apps using web views. This was advocated as being cross-platform, allowing developers to write the same code and run it on multiple platforms. The downsides are the same in some ways, as the parent enumerated.

Xamarin's use of native UI is important, but in my experience it's clearly not as seamless as you think; while you access the UI natively, there are intrinsic architectural differences that make it difficult to do so in a high-performance manner without a lot of fairly hacky code.


Yes, I've read the parent closely. The downsides are still wrong, because they assume that Xamarin is a cross-platform UI toolkit, which is not true. The performance downside is wrong, the newer features downsides is wrong, the front-end code sharing is wrong, the third party libraries is wrong (Xamarin can generate bindings for other frameworks automatically), and the only debatable one is getting locked in a framework.


I haven't used Xamarin, but I've used a number of other frameworks (some webview based, many not). Some had benefits, but many had a lot of drawbacks.

I could write a long blog post about what a headache it is to support both platforms, but I don't think any of these frameworks solve the real pain points. It's generally a lot of small things which add up. It also takes a lot of team discipline.

Code sharing would be useful, but not massively since the majority of code is in the UI. When the UI is different, a lot of differences start to be needed in the non-UI code as well. Suddenly you are back at square one. Depends on the app though.


90%+ shared code is a bold claim, Xamarin.com states 60-90%.

In my experience it's closer to 60%.


And you must have missed when Apple open sourced Swift and released binaries for Ubuntu 15.10 and Ubuntu 14.04.


Completely false.

Neither sales nor market share have been dropping like a rock.

Apple has had their first year-on-year revenue decline in 13 years of continuous growth, but there is no indication that anything is 'dropping'.


I didn't really want to get into the reasons why. I was just pointing out that investing in an Apple-only tech might be a bad idea because it sure looks to me (and the stock market) that Apple has peaked and is now on the decline.

However, since you said my statement was completely false without a source, I felt compelled to show my sources.

51 million iPhones sold compared to 61 million the same quarter last year, a DROP of 10 million. iPad and Mac are also down double-digit percentages, but iPhone is the only thing that really drives Apple. Overall sales DROPPED from $58B to $50B. Source: http://money.cnn.com/2016/04/26/technology/apple-earnings/in...

iPhone market share also DROPPED from 18.3% to 15.3% in Q1. Source: https://www.idc.com/getdoc.jsp?containerId=prUS41216716

Sure, Apple still makes a ton of money and profit, but you can't really argue that it's not dropping like a rock in the past 12 months. If you put $1,000 into Apple stock a year ago today, you'd have about $700 today. Source: https://finance.yahoo.com/echarts?s=AAPL+Interactive#{%22all...

Let me know where I went wrong, but I'm showing double digit sales drops, a large market share drop, and 30% stock value drops in the last 12 months. I'm not sure how my post is "completely false".


There have been 30% stock price drops multiple times during the 13 years of continuous growth. They are clearly not correlated with the actual growth prospects, and if the stock market actually thought that Apple was dropping like a rock as you claim, you would expect to see a far higher discounting.

You mention the Mac, which has declined in absolute terms but has continued to grow relative to the declining market.

The same is true of the iPad, and there is evidence that its decline is actually halting - i.e. it is reaching a plateau that is lower than its peak. Whether it will return to growth or not is an open question, but it is clearly not dropping like a rock.

So - the stock market story doesn't support your conclusion, nor does the iPad or Mac.

That leaves the iPhone. It is possible that the iPhone has reached a peak in terms of revenue.

Is it possible that it has reached an all time high in terms of active user base? That seems extremely unlikely.

For one thing, the total number of iPhones sold per year is still astronomical, and the devices have a long useful life. Even without a change of strategy, there will still be a huge number of new iPhone customers over the coming years.

Secondly there are many possible strategic solutions to a reduction in sales. Apple is selling far more SE devices than they anticipated, suggesting that there is pent-up demand for cheaper iPhones. They can address this segment easily, which will continue to increase the user base of IOS, even if revenue growth stagnates as a result of lower ASPs. This is just one possible strategy adjustment that would address the concern.

As I said, there has been a decline, but nothing supports your conclusion that Apple is 'dropping like a rock'. As such this is a bad conclusion on which to make a choice about whether to learn Swift or not.


This "dropping like a rock" quarter was Apple's third most profitable Q2 of all time and they made more money than Alphabet, Microsoft, and Facebook combined. The Apple Watch so-called "flop" is estimated to have outsold Rolex by $1.5 billion in the past 12 months. Their services revenue grew by 20% year-on-year.

At some point you have to realise that there's nothing Apple can do that would count as a success in some people's eyes. They are the most successful total failure I've ever heard of.


I apologize, as I seem to be failing to convey my message to you.

My assertion: Apple is dropping. Be careful about investing in Apple-only techs like Swift.

Your argument (I think): Apple is not dropping like a rock.

To support your argument, please provide a sources for the following:

- The last time Apple stock was down 30% from same day prior year.

- iPhone sales are not dropping > 10% (hence 'like a rock') .

- Mac sales are not dropping > 10% (hence 'like a rock').

- iPad sales are not dropping > 10% (hence 'like a rock').

- How iPhone + Mac + iPad all being down over 10% from prior year doesn't support my conclusion.


Your message isn't really that convincing. Apple, while it may be "dropping", is far from doing so "like a rock" – the platform is clearly huge and will very obviously continue to be so for quite some time. Added to that, Swift is not an Apple-only technology.


You are failing to convey bullshit that's all.


Perhaps Swift is like ActionScript? That's where a lot of professionals left ActionScript for Scala, and some invested in and adopted Swift. Who cares about a sales drop when it's not your job, when the main goal is to get Swift onto cross-platform and IoT. Content drives sales; you've seen how Windows, BB, and Jolla phones have failed.

It's safe to predict Swift will be supported on Android, and I still prefer iOS for entertainment, whereas I have had a bad experience on many Android phones for years, including it taking years for Samsung and LG to release new ROMs.


So abiding by that same logic, would you not agree that developing for Android has been a poor career decision up until say, 12 months ago?


What is Google-only about Java code?


Swift is fun. I'm having a great time playing around with it. There are some nuances that take some getting used to, but overall, it's a powerful, fast and fun language to work with.

If you are interested, I have taken on implementing all major design patterns in Swift. I've gone through about 10 of them right now, (7 published on my blog).

If you've never worked with Swift but wish to get a feel for it without going through overly simple tutorials, check them out: https://shirazian.wordpress.com/2016/04/11/design-patterns-i...


Are there any plans in place for locking down the syntax? Major release 3 is coming and developers are still being expected to hit a moving target.

One of the nice things about C is that K&R era C is just as valid as C written nowadays, but Swift appears to be going in the opposite direction, every major release adding and removing bits of syntax.

As someone unfamiliar with the language, it makes me not want to pick it up, since guides and documentation I read now will be incompatible with the newer release when it happens. Some early toying around during Swift's original announcement led to hard-to-debug errors (in part, caused by terribly useless error messages) when trying identical code on newer releases.


C got started in 1972, and K&R C dates from 1978 and includes compatibility-breaking changes.

You want a young language to change a lot, because you don't really know what it needs until you get it out there and people use it. You want to make necessary changes as early as possible to cause the least pain. Major changes at the two-year mark will be a lot easier than major changes at the five or ten year mark, and waiting too long to fix a bad decision will probably mean it sticks around forever.

I would expect Swift 3 to be the last release with major source-breaking changes.


However:

- Almost nobody actually used C prior to 1978.

- Most C compilers have supported both current and previous versions of the C standard via -std flags.

- Other languages with breaking changes, like Java, are similar. Java has always supported specifying the source language level via -source.

Swift is unique in breaking old code completely, without recourse other than semi-functional code rewriting tools.


Almost nobody actually used Swift prior to mid-2014. Did any 1978 C compilers offer both K&R and prior as options?

IMO the example of Java is a great illustration of why Swift shouldn't be tied to source compatibility too soon. You don't want to be stuck supporting your 1.0 syntax until the end of time, or you'll end up like Java!


Used everywhere, with decades old libraries that still function fine?


I believe Swift 3.0 is the last breaking release. Versions after this will avoid breaking the API and language as much as possible, and I imagine will allow backwards compatibility for changes they do make.

I'm okay with the fast-moving nature of Swift. It's a very young language that obviously needed a lot of changes, and if they had had to keep the earliest features working all the way through version 3, Swift would already be bloated. But they have been able to do away with backwards compatibility, which is difficult in this industry, and I think it has allowed them to make a richer language, not held back by anything.

Side note: For at least some of the changes they've made, they've also built into the compiler and Xcode smart error messages that see the old syntax, and allow you to click one button to fix it to the latest syntax. Such as the old `for i in 0..10 {}` to the new `for i in 0..<10 {}`, Xcode will tell you to basically insert the `<` for the new operator.


Certainly a problem they're aware of, and they're trying to get as many breaking changes done in 3.0 as possible. There may or may not be more in 4.0, but they will definitely be smaller.

> What does this mean looking forward? Well, Swift 2 to Swift 3 is going to be an unavoidably disruptive change for code since Cocoa renamification is going to land, and we’re going to be building impressive migration technology again. As such, we should try to get the “rearrange all the deckchairs” changes into Swift 3 if possible, to make Swift 3 to 4 as smooth as possible. While our community has generally been very kind and understanding about Swift evolving under their feet, we cannot keep doing this for long. While I don’t think we’ll want to guarantee 100% source compatibility from Swift 3 to Swift 4, I’m hopeful that it will be much simpler than the upgrade to Swift 2 was or Swift 3 will be.

http://ericasadun.com/2016/02/29/getting-ready-for-swift-to-...


Swift is still really young. Now is the time for these breaking changes. As someone who has been working with Swift since its public release, I haven't found the "moving target" to be that disruptive to my development. Even the cases that the migrator fails to handle have taken me less than an hour or so to fix myself on a fairly large Swift app.


While the language is nice, I find it hard to work with existing frameworks, given they are designed for a language as dynamic as Objective-C. For example, NSError's and NSNotification's userInfo is an `[AnyObject: AnyObject]`, but its member types are only specified in documentation. It would be nice if we could have specific error types with typed userInfo. Working with storyboards is the same story: there is no type check for VCs and segues because the identifier is a string, instead of something like `R.id.view` on Android. Working with this kind of API requires lots of casting, and I wonder whether Apple will design Swift-centric APIs later.
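
For example, a typical keyboard-notification handler ends up as a chain of hand-written casts, with the key names and value types only documented in prose (sketch, Swift 2.2 / UIKit):

    import UIKit

    func keyboardWillShow(notification: NSNotification) {
        // Everything in userInfo comes back as AnyObject and has to be cast by hand.
        guard let info = notification.userInfo,
              frameValue = info[UIKeyboardFrameEndUserInfoKey] as? NSValue,
              duration = info[UIKeyboardAnimationDurationUserInfoKey] as? Double else {
            return
        }

        let keyboardFrame = frameValue.CGRectValue()
        print("Keyboard \(keyboardFrame) appearing over \(duration)s")
    }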


Apple has already done a lot to make this sort of thing nicer. They added lightweight generics to Objective-C, so arrays are typed now. NSError bridges to ErrorType. They've also announced further plans to Swift-ify existing Objective-C APIs.


Quite surprised to see no mention of Kotlin here, since both languages are very similar; the main difference is that Swift is LLVM-based while Kotlin runs on the JVM and has excellent Java interoperability.

See http://fr.slideshare.net/andrzej_sitek/swift-and-kotlin-pres... for more details ...


I've never understood the fascination with Swift. What's wrong with Objective-C?


Objective-C has no safe collections for non-object values. If you want an array of ints, for example, you either get to use a C array with lots of manual management and potential for error, or you use an NSArray of NSNumbers and pay for a bunch of overhead.

It has very poor support for custom value types. Objective-C structs basically can't contain object pointers, so they're limited to simple things like CGRect. They also can't contain methods, so you end up writing associated code in global functions.

Protocols are extremely limited. They're basically just collections of method declarations. There's no way to add functionality to every class which conforms to a protocol.

Objective-C is often verbose to the point of painful redundancy. Consider:

    NSString *x = [NSString stringWithFormat: @"%d", number];
Versus:

    let x = String(format: "%d", number)
Other examples include having to write the signature of every public method twice (once in the header and once in the implementation) and the need to do an annoying `self = [super init]` dance in every initializer.

There's almost no functional programming stuff available, like map and reduce. This is not strictly a language complaint, but since the standard libraries are pretty tightly woven in, I think it still counts.
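
For contrast, in Swift the equivalent is a short chained expression (a sketch):

    let prices = [2.50, 4.00, 1.25]
    let expensive = prices.filter { $0 > 2.0 }    // [2.5, 4.0]
    let totalWithTax = prices
        .map { $0 * 1.08 }                        // add tax per item
        .reduce(0, combine: +)                    // then sum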

Generics support is really limited. What's there was only added to interoperate better with Swift, anyway!

That's a quick overview of some of the ways Swift improves on ObjC. I'm sure there's more.


I'd add typed enums to the list of notable bug-avoiding improvements in Swift. I sometimes describe Objective-C as "all of the memory safety of C with the type safety of SmallTalk".


I may steal that quote; I think I like it better than calling Objective-C the "wild west of OOP".


We can even use string interpolation

    let x = "\(number)"


Don't get me wrong – I'm a big fan of Objective-C, and I think it gets a far harder time than it deserves.

That said, Swift feels similar while being more productive:

- Lots of nice syntactic sugar to reduce noise

- Optionals (and chained calls) which do much the same

- Type inference!

- A REPL. Pretty amazing.

- Bounds checking

lots more nice features too.
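
A couple of those in one toy snippet:

    struct Address { let city: String }
    struct User {
        let name: String
        var address: Address?        // optional: legitimately allowed to be missing
    }

    let user = User(name: "Ada", address: nil)     // types inferred throughout

    // Optional chaining + nil-coalescing instead of nested nil checks:
    let city = user.address?.city ?? "unknown"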


Swift is much more type-safe (and modern overall). Also, its OO model is similar to C++'s, with direct method calls instead of message sending, which results in better performance.


Objective-C likely suffers from similar problems of C and C++. It's just too easy to shoot yourself in the foot with these languages.

Rust, D, golang, swift all appear to share a common goal of having a mid-to-low-level imperative/OO language that has learned from C/C++/ObjC's mistakes.


[nil someMessage]

Though some consider this a feature, YMMV.


Too many times have I forgotten to initialize some member and just had things silently fail in strange ways. Swift checks for uninitialized "let" variables or non-optional "var"s


Am I the only one who, as a JS dev, finds it hard to deal with types & casting in Swift?


Probably not. Types and casting seem complex to people coming from more dynamic languages – that's okay though! It formalises something that you generally don't have to think about in Javascript – the tradeoff being that you have more consistent, less buggy software at the expense of more effort while writing it.


Agreed. I have yet to formally make my app available, but I had a tough time mainly while doing Ajax calls and typecasting the JSON I got. In fact, it took time to understand some concepts like ARC and optional chaining when you come from a JS background.
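
For what it's worth, the cast-heavy part usually ends up looking something like this (Swift 2.2, NSJSONSerialization):

    import Foundation

    let data = "{\"name\": \"Ada\", \"age\": 36}".dataUsingEncoding(NSUTF8StringEncoding)!

    // Everything comes back as AnyObject, so every level needs a conditional cast.
    if let object = try? NSJSONSerialization.JSONObjectWithData(data, options: []),
       dict = object as? [String: AnyObject],
       name = dict["name"] as? String,
       age = dict["age"] as? Int {
        print("\(name) is \(age)")
    }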


> typecasting the JSON i got

This is where a typed serialization comes in handy. (Protobufs, Thrift, Cap'n Proto, etc.) It is also possible to use an IDL to generate a type-safe API to deal with JSON -- although I don't know of any tools that emit Swift. Interfacing with third-party JSON APIs can be a pain though.


Maybe typescript would be a good thing to experiment with first? You could then get used to types in a familiar language first.


Does "will be portable" include any notion of a cross-platform UI?


I think not, but I'm working on an application platform where the SDK is in Swift. I'm using the Chrome runtime, so I'm binding into the Chrome compositor, the same rendering engine that WebKit uses. This part is almost finished and is rendering properly; I just need to finish creating the finished components (buttons, etc.) over the UI view.

I'm trying to create a distributed application platform, using the BitTorrent DHT for that, so this is the part I'm working on right now.

I hope I can launch this in a couple of months from now, so Swift can have a multiplatform UI and applications can be distributed in a decentralized way.


Languages don't have UIs - frameworks do. I realise it's a pedantic distinction, but it's an important one.


I get the distinction, but there appears to be a deliberate direction to make Swift more portable. Thus, I was curious if that effort might include something UI related in the future. The interest level would certainly spike up if something were available.


But it wouldn't make any sense to include UI components in a programming language. And I think the interest level is already "spiked".


Okay. I was under the (perhaps mistaken) impression that the vast majority of Swift apps were iPhone apps, with the Apple-specific UI bindings, such that one of the first questions on a new platform (Linux, Android, etc.) would be "how do I create the UI?".

It sounds like instead, the cross-platform appeal is that back-end or background services can now be written on non-Apple platforms?


That's correct. UIKit isn't, like, baked into Swift. Whatever platform you're writing for would need to provide the UI components to you, and then some.


Porting Foundation is much more important than any UI.


I don't think anyone is disputing that. Just asking if it's part of a larger plan, or not.


It would still be great if everything were a value.


The removal of prefix and postfix ++ and -- operators and the removal of C-style for loops are mistakes, IMO.


With regard to C-style for loops - other than familiarity - why?

It always struck me as a fairly warty syntax.

Consider Python, where you usually do:

    for item in iterator:

If you want an integer sequence it's:

    for integer in range(1, 100):

and for those rare occurrences where you need an integer index as well:

    for index, item in enumerate(iterator):

Swift seems to have a similar approach. Why would you ever want the C-style iterators?
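
The Swift equivalents look roughly like this (a sketch; note that `enumerate()` became `enumerated()` in later Swift versions):

    let collection = ["a", "b", "c"]
    for item in collection { print(item) }
    for i in 1..<100 { print(i) }
    for (index, item) in collection.enumerate() { print(index, item) }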


For the simple case with a single iterator, yes, the C-style for loop isn't as concise. However, there are lots of real-world situations where there is not a single iterator (e.g. looping through 2 arrays). Having to go back to 'while (boolean) { }' loops, with initializers outside the loop and incrementers at strange places inside it, is much more confusing and error-prone.


You can use the zip function to loop through 2 collections.

`for (l, r) in zip(c1, c2) {`

It is not simply motivated by making code concise; I would say `for num in collection.reverse()` is less error-prone and clearer than `for (var i = collection.count - 1; i >= 0; i--) { var num = collection[i] ... }`

The reversed collection is computed lazily too, so there is little perf overhead.


Sometimes having those counting variables available is needed. You can use enumerate now, but it introduces a lot of noise, as now everything is a tuple with a number. It just makes expressing some algorithms messier and less readable, although in general the "new style" loop is certainly more readable and less bug-prone.


I gotta disagree with this one. Those operators have always been confusing and unnecessary. j = i++ being different from j = ++i alone is enough to convince me they've gotta go.
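
For anyone who hasn't hit it: in C (and in Swift 2.x, before the removal), the two forms yield different values even though both increment by one:

    var i = 0
    let a = i++   // postfix: a == 0, i == 1 (yields the old value)
    var j = 0
    let b = ++j   // prefix:  b == 1, j == 1 (yields the new value)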


Exactly. I am disappointed they're removing function currying/partial application syntax though :(
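
For reference, the declaration syntax being removed looks like this in Swift 2.x (a sketch; the function is made up), with the suggested replacement being an explicit closure-returning function:

    // Swift 2.x curried declaration syntax (removed by SE-0002):
    func add(a: Int)(_ b: Int) -> Int {
        return a + b
    }
    let addTwo = add(2)        // partial application
    addTwo(3)                  // 5
    // The suggested replacement: return a closure explicitly.
    func add2(a: Int) -> (Int) -> Int {
        return { b in a + b }
    }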


This was my first reaction as well, but then I realized I hadn't done much currying in Swift and half forgot it was a feature.

Maybe you can bring it back when we get hygienic macros (Swift 4.0? fingers crossed)


The currying was cool. I had to read the pull requests on that one, and it seems that it was removed because it complicated language development. I'm hoping they bring it back in a modified, more extensible way!


++i and i++ have their uses. Just because they're confusing to you doesn't mean they're confusing to others. There are things about Swift that are confusing to me, but I don't outright say they have to go.

Chalk this one up to shit Hacker News says.


I believe something can be easy to understand (as i++/++i are) but still confusing.

Things that require a double take to read and parse do not help. Additional solutions to the same problem do not help.

There's a lot of cleverness that can come up in programming that may make for neat/short writing but that makes reading and purpose less clear. i++/++i contribute to that, with no special benefit.

I like how the Swift team approached the decision to remove them: thinking about whether it would make sense to add them had they not been there. And it doesn't, as they solve no particular problem; they're just a special, (not much) shorter solution to something that's already solved.


It's too easy to get out-of-bounds errors when looping over arrays or objects, which can be confusing for teams dealing with large-scale code where requirements change frequently.


As a C++14 programmer, I disagree. The for..in syntax is so much clearer, and being able to simply specify the range without having to manually increment makes more sense to me. I would love to see something like Swift's stride in C++.
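
For reference, the stride being referred to looks roughly like this (Swift 3+ spelling; Swift 2 wrote it as `0.stride(to: 10, by: 2)`):

    for i in stride(from: 0, to: 10, by: 2) {
        print(i)    // 0, 2, 4, 6, 8
    }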


> prefix and postfix ++ and -- operators

IMHO, these operators are among the worst features of C. In particular, I never understood the necessity of having both i++ and ++i in the language at the same time.


It's to make code more concise. There are many idiomatic one-liners to compare arrays, for example, or to find the first element which satisfies a condition, etc.

I don't mind removing them, but their use case is obvious: make the code more concise and more readable (and yes, if you program in C on a daily basis, you don't require a double take or any thinking about what ++ does).


How does Swift compare to D?


Submitters: Please don't rewrite titles to say what you think is important about an article. Cherry-picking a single detail is a form of editorializing, which HN doesn't allow in story titles. The guidelines ask you to change titles only when they're misleading or linkbait, which wasn't the case here.

If you think one detail is most important, you're welcome to comment on that in the thread. Then your opinion is on the same level as other users'.

(Submitted title was 'Swift 3 will be portable / be able to run on more platforms'.)


Enforcement of this particular policy is one of the things I dislike most about HN. The current title is awful. It gives no information that the URL itself does not and doesn't explain why I'd care about the contents now given I've visited the repo in the past. I'm not opposed to the removal of editorializing but it should get replaced with something neutral that still captures why the URL was submitted, e.g. "Swift 3 Roadmap Update".


Agreed. Please, don't let HN become Reddit.


[flagged]


You've been posting quite a few unsubstantive comments to HN. Please don't do that. We're looking for civil, thoughtful discussion here.

https://news.ycombinator.com/newsguidelines.html

https://news.ycombinator.com/newswelcome.html

We detached this subthread from https://news.ycombinator.com/item?id=11660700 and marked it off-topic.


What do you mean by different level of retard? You mean it's easier to shoot yourself in the foot with Obj-C?


There are too many projects named 'swift'.


A bit too late for wider adoption, isn't it? Are there examples of a similar path to openness?


I don't see why that would be the case. The language has been public for less than two years, and I don't see any reason it wouldn't become widely adopted for situations where it's a good fit.


Not so; every year students and millennials graduate, and users rely heavily on mobile while on the move and overseas. Get ready to watch WWDC 2016; the demand is still very much active.

ActionScript went open source; it created a lot of unhappiness in the community and had poor IDE performance, but Swift is a different story. I'm looking forward to having Swift on the server as long as vendor contributions keep growing; MS could be keen.


Swift is a great language and an enormous leap forward in the Apple sphere. I'd also like to see a bigger investment in JavaScript everywhere. I'd like to see Apple make a push to allow writing 100% JavaScript apps for iOS and Mac. I think it could be a huge differentiator for iOS and Mac development, and it would bring Apple the largest developer base out there.


Starting with OS X Yosemite, you can use JavaScript for writing Mac apps and scripts. https://developer.apple.com/library/mac/releasenotes/Interap...


Using JS for automation is a great start. I don't think they have a way to build a JS app that leverages the full capabilities of either Mac or iOS yet.


You can write full OS X apps in JavaScript:

http://tylergaw.com/articles/building-osx-apps-with-js

JavaScript bridging works on iOS too!
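
The bridging piece is JavaScriptCore, which you can call from Swift on both platforms. A rough sketch (the expression evaluated is just a placeholder, and error handling is omitted):

    import JavaScriptCore

    // Evaluate a JS expression from Swift via JavaScriptCore.
    let context = JSContext()
    let answer = context.evaluateScript("6 * 7")
    print(answer.toInt32())    // 42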


I will be happy without that "largest developer base" brought to iOS. Anyone who thinks JavaScript is the best language ever deserves to keep programming in it.


The 'best' language is subjective. There are many apps where JavaScript could be one of many 'best' choices and some where it certainly couldn't be. Either way, you don't have the choice today.



