If you have a chance to chat with the staff again, you might mention this: I notice their marketing language says "Nearly perfect spectrum matching with daylight," but they don't publish a Spectral Similarity Index. They only claim a (relatively low) CRI of 90+.
Edit: In other materials, they claim a very high CRI of 95+. Also the advertised wattage is sometimes 400W, other times 500W.
I have a 200W LED flood light and it was the most depressing thing to shine indoors. EVERYTHING was covered in dust and filth that you wouldn't see under normal-intensity light. It felt like things had collapsed and I was surveying the remains of someone's house.
It appears they are re-engineering their product, as they've taken down their sign up link and their landing page now advertises an upcoming, more traditional spreadsheet product: https://subset.so/
Their docs are still up, however, and still have screenshots from their old infinite-canvas spreadsheet product: https://docs.subset.so/
You should check out https://brev.dev. You can rent GPUs, pause instances, use your own AWS/GCP accounts to make use of your credits, and the CLI lets you use your GPU as if it’s on your local machine.
Check out the Creativity Faucet by Julian Shapiro. It’s a brief take on the same phenomenon: you’ve got to let the tap run and let the wastewater out before getting to the good stuff.
That’s because, in part, the good stuff is made out of identifying why the wastewater fell short.
If you want to get started building with AI right away, I'd recommend getting an OpenAI key and playing with prompts in a tool like Everyprompt. Start asking it whatever you can think of, click Deploy to make an API out of it, then try building apps based on that.
You can then try running Stable Diffusion and using that in your apps too.
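To give a sense of how little code that first step takes, here's a minimal sketch of calling OpenAI's completions endpoint directly over HTTP. The model name, prompt, and environment variable are just illustrative; a tool like Everyprompt essentially wraps a call like this for you.

```python
# Minimal sketch: call OpenAI's completions endpoint over plain HTTP.
# Assumes your API key is in the OPENAI_API_KEY environment variable;
# the model name and prompt are placeholders.
import os
import requests

response = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "text-davinci-003",
        "prompt": "Write a one-line tagline for a plant-watering app.",
        "max_tokens": 50,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"].strip())
```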
These other answers, while well meaning, are for building models and not for practically working with AI. They're like being given resources on how compilers work when someone asked how to write "Hello World."
Once you get that magic feeling of having something working, you can always dig into all of the research, like https://course.fast.ai, later.
The computer doesn’t have to do these locally—there’s no reason this couldn’t be implemented as a thin client.
I agree that data has to be converted, but I don’t agree that it has to be as manual as it is today. Consider this: In addition to your API, a second JSON file gets generated that describes the schema of the first. On the consuming developer’s end, their language & IDE uses that schema to let you call APIs or RPCs exactly like they were local functions.
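As a rough sketch of what I mean (every name and URL below is hypothetical), the consuming side could read that generated schema and expose each described operation as something that reads like a local function:

```python
# Hypothetical sketch: alongside the API, the server publishes a
# machine-readable schema, and the client turns each described operation
# into something that feels like a local function call.
import requests

SCHEMA = {  # imagine this was fetched from https://api.example.com/schema.json
    "getUser": {"method": "GET", "path": "/users/{id}"},
    "createUser": {"method": "POST", "path": "/users"},
}

class Client:
    def __init__(self, base_url, schema):
        self.base_url = base_url
        self.schema = schema

    def __getattr__(self, name):
        op = self.schema[name]  # an unknown name fails, like a missing function

        def call(**kwargs):
            url = self.base_url + op["path"].format(**kwargs)
            # anything not used in the path goes into the request body
            body = {k: v for k, v in kwargs.items()
                    if "{" + k + "}" not in op["path"]}
            resp = requests.request(op["method"], url, json=body or None)
            resp.raise_for_status()
            return resp.json()

        return call

api = Client("https://api.example.com", SCHEMA)
# user = api.getUser(id=42)   # reads like a local function call
```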
I've been hiding in a cave for a long time, metaphorically. I've never had to deal with JSON or XML for that matter... I'm shocked that there's no metadata with type information at the top of either one? That's insane... everything is a gosh darned string? Ick!
If you have function calls without type information, anything can happen: a function call could result in your neighbor's pool being drained and your bank balance being sent to Belize.
In Pascal, you have to have a type before you can declare a variable. This little inconvenience saves you from an entire class of errors.
If I'm going to import a random file from the internet, I have to be sure of the type of data in it before I'm going to touch it with a 10 foot pole (or barge pole in Britain).
I had no idea people got so foolish.
Back to your idea, of course there should be type information, either as a separate file, or at the head of the file.
In Pascal, I'd have the import routine check it against the RTTI (run time type information) of the local native structure as part of the import, and throw errors if there were problems. On export, the RTTI could create the type header file/section of JSON.
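Outside of Pascal, a rough analogue of that import-time check might look like the following hedged sketch (the Invoice record and its fields are invented for illustration): declare the local native structure once, then check imported JSON against it and throw errors before the data touches anything else.

```python
# Rough analogue of the RTTI idea (in Python rather than Pascal): the local
# structure's declared field types act as the "type header" the import is
# checked against.
import json
from dataclasses import dataclass, fields

@dataclass
class Invoice:
    id: int
    customer: str
    total: float

def import_invoice(raw: str) -> Invoice:
    data = json.loads(raw)
    for f in fields(Invoice):
        if f.name not in data:
            raise ValueError(f"missing field: {f.name}")
        if not isinstance(data[f.name], f.type):
            raise TypeError(f"{f.name}: expected {f.type.__name__}, "
                            f"got {type(data[f.name]).__name__}")
    return Invoice(**data)

print(import_invoice('{"id": 1, "customer": "Acme", "total": 99.5}'))
```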
There are ways to specify schemas for JSON and XML. However, for APIs, XML's gone and schemas for JSON never caught on.
JSON is a serialized JS object, which itself is untyped, so anything can be in any order. Think NoSQL databases. This simplicity gave JSON the ability to be adopted by a multitude of languages very, very quickly. Foolish, maybe, but you could build Stripe on it.
GraphQL, an emerging API standard, does feature schemas, and its rise is bringing types back to APIs. It can be a bit more work to implement than a plain JSON API, though.
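To make that concrete, here's a tiny sketch of a typed GraphQL schema using the graphene Python library (one of several implementations; the types and resolver data are made up):

```python
# Tiny sketch of a typed GraphQL schema with the graphene library.
import graphene

class User(graphene.ObjectType):
    id = graphene.ID(required=True)
    name = graphene.String(required=True)

class Query(graphene.ObjectType):
    user = graphene.Field(User, id=graphene.ID(required=True))

    def resolve_user(root, info, id):
        return User(id=id, name="Ada")  # a real resolver would hit a database

schema = graphene.Schema(query=Query)
result = schema.execute('{ user(id: "1") { name } }')
print(result.data)  # {'user': {'name': 'Ada'}}; a query that breaks the types is rejected
```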
It's the classic complex-simple-complex pendulum swing. We're not done, either.
A big issue there is accessibility to a broader audience—something that Emacs and Lisp Machines don't quite have. Like you said, iOS Shortcuts is a very interesting example. You could say Internet-native automation tools like Zapier and Integromat fit the bill here too.
I wouldn't see potential maintenance or QA work as a deterrent to mainstream adoption. Users would underestimate it just like developers today do.
Listening to his recording of the subthalamic nucleus—as he describes it, "the sound of hundreds of thousands of your own neurons excitedly relaying the information they carry to one another fill[ing] the room"—was worth the read alone.
Highly recommend reading this post.
To the author: thank you for sharing, and I wish you a successful course of treatment.
Yeah, that was super interesting. When I listened to the recording, the first thing I thought of was the high-frequency (well, relatively speaking, of course) clicking sounds that dolphins and some other cetaceans make. I've always wondered if there's a connection there when brain information is translated “in the raw”.
The article’s position comes down to “no fundamentally new way to program would make sense for today’s programmers to switch to”, and it gives examples like the platforms of the no-code movement.
From previous generational leaps, we’ve learned that the users post-leap don’t look like the pre-leap users at all. The iPod’s introduction brought about a generation of new digital music users that didn’t look like the Limewire generation, and the iPhone’s average user didn’t look like the average user of the BlackBerry before it.
Modern programming is at the core of HN, and of most of SV, sure. That said, we should still be the first to realize that a successful, fundamentally new way to program would target a new generation and idea of software maker, one that won’t look like the modern developer at all.
Exactly. A paradigm shift implies new mental models and new metaphors for our abstractions that might not be valuable to people who think our current abstractions serve us well.
A great example of this is the fact that we still use the metaphor of files and folders for organizing our source code. The Unison language works directly with an AST that is modified from a scratch file[0]. For people committed to new models of distributed computing, that makes sense; for everyone else, it might be seen as an idea that messes with their current tooling and changes existing and familiar workflows.
I think the really big leaps forward are going to go well beyond this, and they will look like sacrilege to the old guard. New programmers don't care whether a programming language is Turing complete or whether the type system has certain properties; they only care about working software. Existing programmers, though, are dogmatic about these concepts. I think the next leap forward in programming is going to offend the sensibilities of current programmers. Having to break with orthodoxy to get a job done won't worry people who don't know much about programming tradition to begin with.
Perhaps. Or it'll be like civil engineering or something, where the fundamental principles really stay similar even as technology/theory dramatically improves.
“I think the next leap forward in programming is going to offend the sensibilities of current programmers.”
Honestly, programmers have been railing against progress ever since the first machine coders shook their canes at those ghastly upstart programming languages now tearing up their lawns.
Meanwhile, what often does pass for “progress” amounts to anything but:
> we still use the metaphor of files and folders for organizing our source code
We don't use the metaphor for storing things. What we use is a hierarchical naming scheme. This makes sense for a number of use cases, and it has been independently discovered multiple times throughout the short history of computing.
You may call the nodes files and folders. That is, however, just a word, a label for the underlying data structure, which is the physical reality. You could just as easily call it something else, and many people whose first language is different from yours probably do.
Wow, thanks for sharing Unison, it seems super interesting! I've been thinking lately about content-addressed code compilation that could allow one to have all versions of a program within a single binary. Apparently there are other benefits to it as well. Can't wait to learn what they have discovered!
I've played a little bit with Unison and it's definitely very interesting and a little bit of a new paradigm (some people compare its "images" to Smalltalk images, but I think they differ enough to be considered distinct paradigms). But they're still working on very basic things, like how to enable people to do code reviews when the committed code is an AST rather than just text, and how to actually distribute such software (I asked in their very friendly chat, but apparently you can't run the code outside the interpreter for now, which I think is written in Haskell). Also, the only help you get while writing code is some syntax highlighting, even though the ucm CLI can display function docs and look up functions by type, similar to Haskell's Hoogle (but in the CLI!). So just be aware it's very early days for Unison (and they do make that clear by "forcing" you to join the #alphatesting Slack chat to install it, which is a great idea IMO as it sets expectations early).
Re: "A great example of this is the fact that we still use the metaphor of files and folders for organizing our source code."
I agree 100%! Trees are too limiting. I'm not sure we need entirely new languages to move away from files; we just need more experiments to see what works and what doesn't, and then add those features to existing languages and IDEs where possible. I don't like the idea of throwing EVERYTHING out unless it can't be reworked. (Files may still be an intermediate compile step, just not something developers normally have to be concerned with.)
I believe IDEs could integrate with an existing RDBMS, or with something like Dynamic Relational, which tries to stick to most RDBMS norms rather than throw it all out the way NoSQL did, in order to leverage existing knowledge.
Your view of source code would then be controlled by querying (canned and custom): bring all of aspect A together, all of aspect B together, etc. YOU control the (virtual) grouping, not Bill Gates, Bezos, or your shop's architect.
Most CRUD applications are event driven, and how the events are grouped for editing or team allocation should be dynamically determined and not hard-wired into the file system. Typical event search, grouping, and filter factors include but are not limited to:
* Area (section, such as reference tables vs. data)
* Entity or screen group
* Action type: "list", "search", "edit", etc.
* Stage: Query, first pass (form), failed validation, render, save, etc.
And "tags" could be used to mark domain-specific concerns. Modern CRUD is becoming a giant soup of event handlers, and we need powerful RDBMS-like features to manage this soup using multiple attributes, both those built into the stack and application-specific attributes/tags.
Then why hasn't this happened over the past 40 years? That's more than one generation of programmers, across a whole lot of change from mainframes to PCs, the web, mobile devices, and cloud services, with thousands of programming languages and tools invented over that time; but mostly it's been incremental progress. PLs today aren't radically different from what they were in the 60s. It's something visionaries like Alan Kay have repeatedly complained about.
New paradigms emerge when people think differently about the problems to solve or try to solve new problems. It makes sense to me that it might take more than one generation of people working on a similar set of problems before we have significantly different solutions.
> A great example of this is the fact that we still use the metaphor of files and folders for organizing our source code.
I think there's something akin to a category error here.
First, let's agree that we do want to organize our source code to some degree. There are chunks of source code (on whatever scale you prefer: libraries, objects, concepts, etc.) that are related to each other more than they are related to other chunks. The implementation of "an object", for example, consists of a set of chunks that are more closely related to each other than they are to any chunk from the implementation of a different object.
So we have some notion of conceptual proximity for source code.
Now combine that with just one thing: scrolling. Sure, sometimes when I'm working on code I want to just jump to the definition of something, and when I want to do that, I really don't care what the underlying organization of the bytes that make up the source code is.
But scrolling is important too. Remove the ability to scroll through a group of conceptually proximal code chunks and I think you seriously damage the ability of a programmer to interact in fundamentally useful ways with the code.
So, we want the bytes that represent a group of conceptually proximal code chunks to be scrollable, at least as one option in a set of options about how we might navigate the source code.
Certainly, one could take an AST and "render" some part of it as a scrollable display.
But what's another name for "scrollable bytes"? Yes, you've guessed it: we call it a file.
Now, rendering some subset of the AST would make sense if there were many different ways of putting together a "scroll" (semantically, not implementation). But I would suggest that actually, there are not. I'd be delighted to hear that I'm wrong.
I think there's a solid case for programming tools making it completely trivial to jump around from point to point in the code base, driven by multiple different questions. Doing that well would tend to decouple the programmer's view of the source as "a bunch of files" from whatever the underlying reality is.
But ... I haven't even mentioned build systems yet. Given how computers actually work, the end result of a build is ... a set of files. Any build system's core function is to take some input and generate a set of files (possibly just one, possibly many more). There's no requirement that the input also be a set of files, but for many reasons, it is hellishly convenient that the basic metaphor of "file in / file out" used by so many steps in a build process tends to lead to the inputs to the build process also being files.
I wonder if there's a way to deal with these concerns with a different visual approach that's better than files. I haven't seen one, but I'm still curious. JetBrains' IDE code navigation (Ctrl+B to go to definition, etc.) is a step in that direction, but ultimately the scrollable areas are still separated into files, even though you can navigate between them more directly.
I wonder how much of this comes from how tightly "programming" has been defined as/synonymous with "writing code".
I have two family members who brought up that much of their job was "custom formulas in Excel." They would not call themselves programmers, but they'd learned some basic programming for their job.
I wonder how much "Microsoft Flow Implementer" will become its own job focus with more and more people getting access to Teams.
Former Limewire developer here. I definitely had an iPod mini prior to Limewire's peak years, as counted by the number of monthly peers reachable via crawling the Gnutella network.
That’s very interesting to note. I imagine that the popularity of the iPod led a lot of new people to Limewire before the iTunes Store and Spotify took off, pushing its true peak to be a lot later than most (including myself) might recall.
The “Don’t steal music” label on every new iPod might as well have been a Limewire ad.
We had an internal URL for a graph of daily sales. Sales definitely went up every time the major record labels put out a press release that they were planning to sue because of the amount of music available.
> What's your take on how everything works these days?
Mostly, I wish technologies to make unreliable P2P transfers more robust had been widely applied to point-to-point transfers. I wish my phone, for instance, used low-data-rate UDP (with TCP-friendly flow control and a low priority IPv6 QoS) with a rateless forward error code (such as Network Codes) to download updates. There's no reason an update download should just fail and start over if WiFi is spotty or you move between WiFi networks.
Power, CO2, and cost efficiencies of scale due to centralization are nice. The shift to mobile makes P2P more challenging, see Skype switching to a more centralized architecture to make mobile conversations more stable.
I wish we had somehow come to a point where users were incentivized to use P2P programs that marked their traffic using the IPv6 QoS field, rather than relying on heuristics to try and shape traffic. Using heuristics to shape traffic incentivizes P2P traffic to use steganography and mimic VoIP or video chat, making everything less efficient. Monthly data quotas at different QoS levels, after which all traffic gets a low priority, would incentivize users to use programs that explicitly signal traffic prioritization to the routers.
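To make "explicitly signal" concrete, marking at the socket level could be as small as this sketch; the IPV6_TCLASS fallback value is Linux-specific and the chosen traffic-class byte is just one plausible low-priority marking.

```python
# Sketch: set the IPv6 Traffic Class on an outgoing UDP socket so routers can
# deprioritize it directly, no heuristics needed. IPV6_TCLASS may not be
# exposed by the socket module on all platforms; 67 is the Linux value.
import socket

IPV6_TCLASS = getattr(socket, "IPV6_TCLASS", 67)
LOW_PRIORITY_TCLASS = 0x20  # DSCP CS1 ("lower effort") shifted into the traffic-class byte

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IPV6, IPV6_TCLASS, LOW_PRIORITY_TCLASS)
# ...P2P transfers sent on this socket now carry the low-priority marking.
```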
Comcast's traffic shaping attempts using heuristics seem to have caused it to forge RST packets when Lotus Notes (mostly used by enterprises) downloaded large attachments, breaking attachment downloads.[0]
> Do you miss p2p?
Sometimes.
> Do you think we could ever get back to it?
I think that really depends on corporate censorship (with and without government pressure) trends in the near future, and how hard the average person wants to push back. I think P2P is unlikely to see a major resurgence any time soon.
I think a better criterion for "peak years" is peak momentum (user increase over time) rather than peak users.
Most technologies reach peak users when they stop growing, and peak momentum when their growth stops accelerating (after that they either stay somewhat stable, e.g. Microsoft, or, like many, start to decline, e.g. BlackBerry).
Limewire's peak years were immediately before it settled a lawsuit. When the lawsuit threats started, the owner panicked, put out a press release saying that he would shut down the company right away, then back-pedaled and announced he'd fight the lawsuit. There was a ton of noise in the press and lots of media speculation about what was going to happen, which generated tons of free advertising.
Part of the settlement was using the auto-update feature to update the vast majority of users to a record-label-developed application that was skinned to look like LimeWire. (I left a year or two before the killing update, but as I remember, they made one release that removed the optionality from the auto-update, waited for the majority of users to update, and then force-updated everyone.)
LimeWire's fall in popularity didn't look anything like a normal decline.
I don't think the media frenzy free advertising was intentional, but the owner was a bit of a mad genius. He has tons of ideas, 80% of which are batshit crazy, and 1% of which are out-of-the-box brilliant. He has a few people close to him who are good at picking out the uncut diamonds. He also founded and runs a very successful hedge fund. On the other hand, he emailed everyone a paranoid email and then talked to journalists about it when it leaked[0].
I agree with the overall point, but I think you're using the wrong example. Music consumption didn't really change with the iPod; it changed with abundant mobile data, which removed the need for locally stored files (and hence their management). You can argue that the introduction of iTunes changed the game, which it did a bit, but imho mobile data is what fundamentally altered the field. The move really was CD -> MP3 (filesharing/iTunes) -> streaming to mobile.
When I bought a 40GB music player in 2005, I stopped downloading songs and started downloading the discographies of entire artists and labels. The change that came with streaming services wasn't the first one.
I'd download entire music libraries (from Soulseek where you can browse people's shared files). I'd look up something I liked and assume someone who likes it has good taste :) And download whatever else seemed interesting. Thus my iPod became a vehicle for discovering new music.
I think the next paradigm shift in programming is the shift from local to cloud IDEs.
I see a lot of backlash against that idea these days, but it seems inevitable.
I don't think we can predict the full consequences of that, but one I see already is massively lowering friction. If the cloud knows how to run your code anyway, there's no reason why the fork button couldn't immediately spin up a dev environment. No Docker, no hunting for dependencies, just one click, and you have the thing running.
The next generation of programmers (mostly young teenagers at this point) is often using repl.it apparently, and building cool stuff with it. This is definitely promising for this approach, as the old generation will pass away eventually.
I work with some folks who use Brewlytics (https://brewlytics.com/). It's basically a way to use logical modeling to automate tasks, actions, and pull and push data for said automation. It's parallel to programming - these folks are using iterators, splitting and recombining fields, creating reusable parts out of smaller parts. They basically ARE programming, but almost none of them know anything more about programming than Hello World in Python.
We don't call people working in Excel programmers, though; not even they call themselves that. That is the thing: we create a ton of wonderful no/low-code tools, but then we create jobs that are different from programming, since once the programming is no longer the hard part, the role is no longer "programmer."
IMHO programming language design is (or at least should be) guided by the underlying hardware. If the hardware dramatically changes, the way this new hardware is programmed will also need to change radically. But as long as the hardware doesn't radically change (which it hasn't for the last 70 years or so), programming this hardware won't (and shouldn't) radically change either. It's really quite simple (or naive, your pick) :)
Yes indeed, and after I had hit the reply button I was thinking about GPUs. But if you look at how a single GPU core is programmed, it is still served pretty well by the traditional programming model, just slightly enhanced for the different memory architecture.
With "radical changes" I mean totally moving away from the von Neumann architecture, e.g. "weird stuff" like quantum-, biochemical- or analog-computers.
> That said, we should still be the first to realize that a successful, fundamentally new way to program would target a new generation and idea of software maker, one that won’t look like the modern developer at all.
Channeling Gibson[1], do you see any potential successors already out there?
[1] “The future is already here – it's just not evenly distributed.”
Real leaps can be distinguished from hype by where the passion is coming from. The fact that the movement’s passion is coming from actual, paying users and not just no-code platform makers is key here.
It’s rapidly creating a new generation of software creators that could not create software before, and it’s improving very, very fast.
I notice, though, that your examples are not from programming at all. Your examples are about users of devices. True, programmers use languages, but programming is far more complicated than using a music service.
Something like "no code" may make programming easier... until it doesn't. That is, you get to the point where either you can't do what you need to do, or where it would be easier to do it by just writing the code. If the "no code" approach lets you write significant parts of your program that way, it may still be a net win, but it's not the way we're going to do all of programming in the future.
"I notice, though, that your examples are not from programming at all. Your examples are about users of devices. "
Just to level set - as a program manager when I engage with programmers it's not because I want to buy programmers.
I want the fruits of their labors.
Let me put it another way - programmers love to bemoan the way users abuse Excel. Users abuse Excel because it meets their needs best, given all other factors in their environments.
If things like no-code environments progress to where they can provide at a minimum the level of functionality Excel can for many tasks, then it will take off. No, it won't be "all of programming," but enough to be a paradigm shift?
OK, take Excel. It provided a way for a lot of non-programmers, who didn't want to become programmers, to program enough to get their work done. And that's great!
But if you look at a graph of the number of people employed as programmers, and you look for the point where that number started to decline because Excel made them unnecessary, well, you don't find it. Excel made simpler stuff available for simpler problems, but it didn't address bigger problems, and there were plenty of bigger problems to go around.
And when we talk about fundamentally improving programming, we aren't talking about improving it for those trying to solve Excel-level problems. (That's still worth doing! It's just not what we're talking about.)
So if you can create a new Excel for some area, it will take off. And that's great, for the people who can use it. It won't be all of programming, but it will be a paradigm shift for those who use it.
Will that be a paradigm shift for all of programming? Depends on how many people use it. My bet would be that there is no Excel-like shift (or no-code shift) in, say, the next 20 years, that will affect even 30% of what we currently recognize as programmers.
(If you introduce a great no-code thing, and 10% of current programmers shift to use it, and a ton of newcomers join them, that still only counts as 10% by my metric, in the same way that we don't really count Excel jockeys as professional programmers.)
Generational leaps emerge in the same ways everywhere.
For any space, if you provide a large enough net win for a large enough number of people, you introduce a generational leap. Very often, those people are completely new to the space.
The measure here isn’t how many growing companies that started with no-code adopt code as they grow. The measure here is how many growing companies that started with no-code wouldn’t have been started otherwise.
This claim is resting upon flimsy metaphors only. Of course, tautologically, each new wave of tech has some differences in demographics, and old people are slow to learn new paradigms, and generations differ, BUT ultimately we have no idea if/when there will be a new wave and how different its users will be. It might very well happen that the demographics don’t change as much, as the programming profession is already one of the most fragmented and eclectic, and attracts people whose primary virtue is manipulating logical abstractions.
I think you are taking the weakest possible extrapolation of the article's position and attacking that.
This article is about changing techs for an existing product. And the author is correct; tech changes are very costly for existing products. You have to weigh the cost of the rewrite. Swapping out your markdown parsing library is probably relatively low-cost. Swapping out your web framework is potentially years of work for no practical gain.
Most of us aren't working on new things. Day 2 of a company's existence, you already have legacy code and have to deal with things that were built before.
Removing barriers to entry comes with its own problems. Today we see that the horrible, error-prone Excel sheets created by non-programmers weren't such a great idea after all. Similarly, many web developers don't understand performance, and we end up with bloated sites and Electron apps.
I think a lot of progress will be incremental. Seemingly "revolutionary" ideas like Light Table break on even modestly real-world stuff. Functional programming is elegant and all, unless you hit a part of the problem that's fundamentally imperative, or there's a performance problem. I think programming progress will be incremental, just as the industry continues to mature.
> Functional programming is elegant and all, unless you hit a part of the problem that's fundamentally imperative, or there's a performance problem.
Most functional languages allow you to do imperative stuff, so this is not an issue. They just usually provide an environment where the defaults guide you toward a functional style (immutable by default, option/result types instead of exceptions, easy partial application of functions and piping, etc.).
A prime example would be F#. You can program pretty much the same as in C# if you need to, but there are a lot of facilities for programming in a more functional style.