Hacker News | spencerchubb's comments

Why do TUI developers insist on doing such weird stuff when they could just make a GUI


Presumably the preference of their users. From what I know, except for Cursor, the GUI interfaces are less popular than the TUI ones. Personally I also did not expect that I would really like the TUI experience, but it's hard for me to switch away from it now because it has become so central to my workflow.


It's easier to ship a TUI app cross-platform, the constraints around UI and state are often simpler, and some good libraries/frameworks (e.g. [1][2]) exist to make a modern-looking UX.

[1]: https://github.com/charmbracelet/bubbletea

[2]: https://github.com/Textualize/textual


I considered a GUI for a small Python project of mine, but couldn't find anything quick, simple, and portable. I ended up opting for a TUI with a few ASCII art boxes.


For quick and simple, by all means do a TUI. I have done it too, and they're super easy to vibecode :)

Claude Code seems neither quick nor simple


Because making a decent GUI is harder than making a decent TUI. Also TUIs give you some nice things for free like working over SSH easily, but I suspect the lower dev effort is the big thing.


I don't think this is true at all. Off the top of my head, the only CLI UI that seems more usable than the GUI equivalent is magit.


As a developer, not as a user. I mean, I also prefer TUIs as a user, but that's not the point I was trying to make.


you think so? i think making a good TUI is a pain in the ass


Neither is easy to make great, but with a TUI you have way more constraints than with a GUI, so you can make something decent quickly and focus on the important interactions rather than pixel-perfect button alignment.

Windows 98-XP GUIs were the best for such cases: there were clear design guidelines, everybody used native components, and GUI designers in IDEs were practical.


I think making a TUI or GUI is a huge pain, but having tried both I think writing a good enough TUI is easier. I suspect writing an actually good TUI is still easier than writing an actually good GUI, but I will caveat that with my lack of experience.


I'm not a TUI developer, but I'm about to become one after my experience with Tauri on a simple project. But let me focus on why I'm a TUI user. Maybe these reasons are why people develop TUI apps:

* Speed: Work gave me an i5. It has lots of RAM after I begged for it, but it's pretty slow. Having TUI apps for programming (vim+aider-ce/opencode), git wrangling (lazygit), music (pyradio), etc. saves a ton of RAM and cpu for me.

* Availability: I use yakuake as my main terminal, so when I don't need those apps they aren't cluttering up my desktop, but when I need them they are immediately available with a tap of F12. No matter what desktop I'm on, there's my workspace.

* Configurability: Most of these apps are ridiculously theme-able, and that's really fun.

* UX: Most of the apps I use use vim bindings. That makes it super easy to get around. I rarely have to touch a mouse.

* Simplicity and portability: My coworkers spend at least a day setting up a new laptop. Yeah they're probably milking it but I'm up and running in a few hours.

* Potential: I've barely touched the surface, but I think there's a lot of compartmentalization of projects to be done with multiplexers like tmux that would be difficult-to-impossible to do with regular GUIs.

* Speed: Apps start and stop in fractions of seconds vs watching a spinner go 'round

* Cool factor: My girlfriend thinks I'm pretty disgusting when she sees how many browser tabs I have open but she thinks I'm pretty hawt when she sees how many terminal tabs I have open.


use firebase cloud functions free tier


companies that actually care about security have a more secure solution and don't allow devs to use pypi


You’d be surprised by the number of companies handling critical infrastructure that are OK with using PyPI directly


He said companies that care, not companies that should care but do not.


really depends on the company. my company cares a lot about security because it's a huge Fortune 50 company with sensitive data, and a lot of reputation could be lost in a security scandal


That is somewhat terrifying


For example, we have it behind a kind of transparent proxy, where you get only packages that were tested and scanned by a team of experts.


Could you give some examples of more secure solutions?


jfrog is the one my company uses


How do you decide what externally available packages to store/cache in artifactory?

I’m curious, as I also deal with this tension. What (human and automated) processes do you have for the following scenarios?

1. Application developer wants to test (locally or in a development environment) and then use a net new third party package in their application at runtime.

2. Application developer wants to bump the version used of an existing application dependency.

3. Application developer wants to experiment with a large list of several third party dependencies in their application CI system (e.g. build tools) or a pre-production environment. The experimentation may or may not yield a smaller set of packages that they want to permanently incorporate into the application or CI system.

How, if at all, do you go about giving developers access via jfrog to the packages they need for those scenarios? Is it as simple as “you can pull anything you want, so long as X-ray scans it”, or is there some other process needed to get a package mirrored for developer use?


Where I am, every package repo - Docker, PyPI, RPM, deb, npm, and more - goes through Artifactory and is scanned. Packages are auto-pulled into Artifactory when a user requests them and scanned by Xray. Artifactory has a remote pull-through process that downloads once from the remote, and then never again unless you nuke the content. Vulnerable packages must have exceptions made in order to be used.

Sadly, we put the burden of allowances on the person requesting the package, but it at least makes them stop and think before they approve it. Granting access to new external repos is easy, and we make requesting them pain-free, just making sure that we enable Xray. Artifactory also supports local repos, so users can upload their own packages and pull them down later.
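For a concrete sense of how clients get pointed at a pull-through proxy like that, here is a hedged sketch of a pip config; the hostname and repo name ("artifactory.example.com", "pypi-remote") are invented, and the URL shape follows Artifactory's usual PyPI layout:

```
# ~/.pip/pip.conf (Linux/macOS) -- route all installs through the
# internal Artifactory remote instead of pypi.org directly.
[global]
index-url = https://artifactory.example.com/artifactory/api/pypi/pypi-remote/simple
```

With this in place, a plain `pip install requests` resolves against the proxy, which fetches from PyPI once, scans, and caches.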


You seem to be misunderstanding why a website would make llms.txt

Obviously, they would not make it just for an AI company to scrape

Here's an example. Let's say I run a dev tools company, and I want users to be able to find info about me as easily as possible. Maybe a user's preferred way of searching the web is through a chatbot. If that chatbot also uses llms.txt, it's easy for me to deliver the info, and easy for them to consume. Win-win

Of course adoption is not very widespread, but such is the case for every new standard.
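For what it's worth, the proposed llms.txt format is just plain markdown served from the site root: an H1 title, a short blockquote summary, and sections of links. A hypothetical example (all names and URLs invented):

```
# Acme Dev Tools

> Acme builds CLI tooling for web developers. Start with the
> quickstart if you're new here.

## Docs

- [Quickstart](https://example.com/docs/quickstart): install and run a first build
- [API reference](https://example.com/docs/api): endpoints and authentication
```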


The point of LLMs is they are able to make sense of the web the same way humans can (roughly speaking); so why do they get the special treatment of having direct, ad-free, plain text version of the actual info they’re looking for, while humans aren’t allowed to scroll through a salad recipe without being bombarded with 20 ads?


A human could read the llms.txt if they want to. And a developer could put ads in llms.txt if they wanted to!


> Super exciting that OpenAI pushed the compute out this far

it's even more exciting than that. the fact that you even can use more compute to get more intelligence is a breakthrough. if they spent even more on inference, would they get even better scores on arc agi?


> the fact that you even can use more compute to get more intelligence is a breakthrough.

I'm not so sure—what they're doing by just throwing more tokens at it is similar to "solving" the traveling salesman problem by just throwing tons of compute into a breadth first search. Sure, you can get better and better answers the more compute you throw at it (with diminishing returns), but is that really that surprising to anyone who's been following tree of thought models?

All it really seems to tell us is that the type of model that OpenAI has available is capable of solving many of the types of problems that ARC-AGI-PUB has set up given enough compute time. It says nothing about "intelligence" as the concept exists in most people's heads—it just means that a certain very artificial (and intentionally easy for humans) class of problem that wasn't computable is now computable if you're willing to pay an enormous sum to do it. A breakthrough of sorts, sure, but not a surprising one given what we've seen already.


An algorithm designed for translating between human languages has now been shown to generalize to solving visual IQ test puzzles, without much modification.

Yes, I find that surprising.


Maybe it's not linear spend.


I am eager to learn the pricing as well. It works sooo well but the pricing will make or break whether it's viable for apps


the ability to invert a function sounds crazy to me. I never imagined that was possible


training an ai model is essentially searching for parameters that can make a function really accurate at making predictions. in the case of LLMs, they predict text.
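As a toy illustration of that "searching for parameters" framing, here is a minimal gradient-descent sketch that recovers the parameter of y = 2x from data; the data, learning rate, and iteration count are all made up for the example:

```python
# Search for the parameter w that makes the prediction w * x match
# data generated by y = 2x, by following the gradient of squared error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x for x in xs]  # the "ground truth" our toy model learns to predict

w = 0.0    # initial parameter guess
lr = 0.01  # learning rate (illustrative)
for _ in range(1000):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 3))  # converges toward 2.0
```

An LLM does the same thing at an absurdly larger scale: billions of parameters, and the "prediction" is the next token of text.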


Github already has a way to get the raw text files


All of them in one operation? How?


I think he is thinking of the "plain" or "raw" view of a single file, so probably not all of them.


Researchers want to publish

Recruiting

