CLUI: Building a Graphical Command Line (2020) (replit.com)
178 points by ingve on May 5, 2021 | 79 comments



Discussion in March last year: https://news.ycombinator.com/item?id=22498665


I think there is great potential for highly interactive predictive suggestions like those shown under section 2, "Discoverability", in the wider Unix ecosystem. However, discoverability is the key issue, and a massive problem space. Command line programs are black boxes (they could be pure bash, a compiled C++ binary, an npm package, etc.), which means there is no programmatic standard for extracting the command set.

The most backwards-compatible way to achieve this would be an interface-like abstraction on top of regular binaries (similar to the way TS interfaces add type completion to JS) which describes a list of parameters/flags and their valid values/data types with an associated description. I think a system like this gives an intuitive and extensible suggestion system without requiring inconceivable redesigns of existing systems, whilst maintaining the familiarity and ease of use of the regular command line.
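
For illustration, a hypothetical sketch of such an "interface" file, written in TypeScript purely because of the analogy (every name and field here is made up to show the shape of the idea, not any existing standard):

    // Hypothetical, made-up description format for a CLI "interface",
    // analogous to a .d.ts file describing an untyped JS library.
    interface FlagSpec {
      name: string;              // e.g. "--quiet"
      alias?: string;            // e.g. "-q"
      description: string;       // shown inline in the suggestion UI
      argType?: "path" | "string" | "number" | "enum";
      values?: string[];         // valid values when argType is "enum"
    }

    interface CommandSpec {
      name: string;
      description: string;
      flags: FlagSpec[];
      subcommands?: CommandSpec[];
    }

    // A shell or terminal could load specs like this alongside the binaries
    // they describe and drive suggestions from them, without the binaries
    // themselves changing at all.
    const gitSpec: CommandSpec = {
      name: "git",
      description: "the stupid content tracker",
      flags: [],
      subcommands: [
        {
          name: "checkout",
          description: "Switch branches or restore working tree files",
          flags: [
            { name: "--quiet", alias: "-q", description: "suppress progress reporting" },
          ],
        },
      ],
    };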


The TS/JS analogy is perfect. This is exactly the approach we took with Fig.

We defined a declarative standard that lets you describe the structure of any CLI tool. Fig then uses these to provide VSCode-style autocomplete in the terminal!

Currently specs are manually contributed by the community, but we are planning to integrate with CLI libraries (cobra, commander, click, etc) so they can be generated automatically!

You can see some examples here: https://github.com/withfig/autocomplete


Fig seems to be reinventing the wheel. Shells already have their own completion systems to handle exactly this problem, and you can change their default frontend to something like fzf. (See, e.g., fzf-tab.)


While they exist, shell-based autocomplete has always felt like a suboptimal solution we just accept out of historical happenstance.

- You need to press tab to get any completion to display

- You don't know what completion will appear when you press tab, or at most one completion displays

- The completions are not at all aware of context, at best just doing some untyped fuzzy matching. Compare this to Fig, which at least seems to know what sorts of arguments each command accepts and displays a list of them

- They only provide completion, not inline documentation, which is one of the big points of the article


None of these are true.

- You can get the completions to display as you type (see zsh-autosuggestions, zsh-autocomplete).

- Completions are completely aware of their context, probably more so than what fig can infer. You do need to actually load the completion functions of your binaries though, which are traditionally named ‘_command’ (so, an example would be ‘_git’).

- They can provide documentation; fzf-tab does so.


Ouch. I suggest you learn to configure your autocomplete. Pretty much everything you say is incorrect, except perhaps the last point. I've used context-aware completion with multiple options shown for over a decade!


Most of those things aren't true of fish:

- It autosuggests on the command line without typing TAB

- The completion is programmable, so git completion knows where a branch is expected or a path is expected, etc.

- When you hit TAB, you get descriptions of each completion


> You need to press tab to get any completion to display

You only need to do that if you want to bring focus to the completion selection. A lot of interfaces will display completions without the need to press tab. And to be honest, while I'm all for making things simpler, I don't think pressing tab is a hard barrier to expect people to overcome.

> You don't know what completion will appear when you press tab, or at most one completion displays

This doesn't make a whole lot of sense. If you know what completion is going to appear then you're hitting tab to save keystrokes. If you don't know the command then of course you're not going to know what completions will display since the whole point of hitting tab is to explore the valid options.

> The completions are not at all aware of context...

That's not even remotely true of Bash, let alone any modern shells like murex and fish.

Murex even goes one step further and doesn't just display the parameters in context of the command that's being run (eg suggesting names of available branches when running `git checkout <tab>`) but it runs the entire command line that precedes it to understand the data being passed into the STDIN of the current command. This is useful when using tools that inspect JSON properties for example:

    murex-dev»  open https://api.github.com/repos/lmorg/murex/issues | [[  ]]                 
    (builtin) Outputs an element from a nested structure                                                                                      
     /0                                /0/active_lock_reason             /0/assignee                       /0/assignees                     
     /0/author_association             /0/body                           /0/closed_at                      /0/comments                      
     /0/comments_url                   /0/created_at                     /0/events_url                     /0/html_url                      
     /0/id                             /0/labels                         /0/labels/0                       /0/labels/0/color                
     /0/labels/0/default               /0/labels/0/description           /0/labels/0/id                    /0/labels/0/name                 
     /0/labels/0/node_id               /0/labels/0/url                   /0/labels_url                     /0/locked  

> They only provide completion, not inline documentation, which is one of the big points of the article

Nope. In murex if you type `kill <tab>` you'll get a list of process names instead of PIDs and when you select one it is still the PID that is placed.

    murex-dev» kill
    (/bin/kill) kill - terminate or signal a process
     543     /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper (Renderer).app/Contents/MacOS/Code...
     738     /usr/libexec/promotedcontentd
     1645    /System/Library/PrivateFrameworks/DifferentialPrivacy.framework/XPCServices/DPSubmissionService.xpc/Con...
     17903   /Applications/Slack.app/Contents/Frameworks/Electron Framework.framework/Helpers/chrome_crashpad_handle...
     47968   /System/Library/Frameworks/Metal.framework/Versions/A/XPCServices/MTLCompilerService.xpc/Contents/MacOS...
     496     /Applications/iTerm.app/Contents/XPCServices/pidinfo.xpc/Contents/MacOS/pidinfo

Likewise if you type `git <tab>` you will get a list of all the next commands that follow and what they do:

    murex-dev» git
    (/usr/bin/git) git - the stupid content tracker
     init           Create an empty Git repository or reinitialize an existing one
     restore        Restore working tree files
     revert         Revert some existing commits
     submodule      Initialize, update or inspect submodules
     push           Update remote refs along with associated objects
The output is colourised and highlighted, so it makes more sense in a terminal than it might appear here. But you get the idea. All of the suggestions are scrollable with the cursor keys; you can quickly jump by typing more characters, or search for specific completions using regex if you press ctrl+f (this last feature also makes it very quick to traverse large directory structures in `cd`).


Yeah, we don't want this to be a '15th standard' situation [0]

The biggest issue with traditional shell completions is that they are kind of tricky to build. Everything is defined imperatively and written as a shell script.

We wanted to lower the barrier to building completions and make a representation that works across different shells.

[0] https://xkcd.com/927/


How far have you gotten in making it work across different shells? I'd be interested in adding something to https://www.oilshell.org/, and have been thinking about this for a while (see sibling comment in this thread)

A big problem I came across is that any shell-agnostic solution is likely to be hard/impossible to implement in bash, which is the most popular shell :)

Since Oil already emulates bash, it was easier just to implement bash APIs and get a huge corpus of completions for free, rather than try to develop a new corpus.

It looks like Fig does things in the terminal emulator / OS and not the shell, which is interesting. Does an approach like that work on Linux?

Another problem I ran into is that GNU readline is pretty limited as far as the UI goes. Fish and other shells have better UIs but they are coupled pretty tightly to the shell.


First off, I've been a big fan of Oil shell for a while! Love the blog. :)

Fig currently works with bash, zsh and fish. Since we are operating at the OS level rather than in the shell, we can get around the inflexibility of readline/bash.

In theory, this approach would work anywhere. What we do currently on macOS is especially 'involved' since we need to provide the completions UI in a separate GUI app that we don't control. But, in general, integrating at the OS/application level should be possible on Linux and Windows as well.


Great, yes doing it at the OS level is a creative approach and I can definitely see advantages to that. I would be worried about having to parse zsh and fish as well as bash, although I suppose coming up with something that works most of the time is feasible.

As mentioned elsewhere I wanted to have some invariants for correctness, but maybe not all of them were necessary. I felt it was useful to separate the problem into completing the shell language vs. completing the argv.

Feel free to join Oil's Zulip channel (link on the home page https://www.oilshell.org/)! There are past discussions on #shell-autocompletion going back a couple of years. It's dormant now, but as mentioned, there were multiple people who wrote code towards this. And there were debates around the issues above.

Another interesting channel that just started is #shell-gui. Short summary which I have yet to blog about: I just implemented "headless mode" for Oil (analogy to headless Chrome). So I can punt the UI for the shell to multiple other projects :) As I mentioned a few times, I realized the scope of the project is too big. I have a collaborator Subhav who has a prototype of a GUI in Go (and shell).

Basically the shell is a language- and text-oriented interface, but it does NOT have to be a terminal-oriented interface. It took me a while to realize that we shouldn't conflate those things! And it was pretty easy to tease them apart in Oil. The headless mode provides a simple interface and allows integration that can't be done with bash (or any other shell AFAIK).

It needs feedback from people who want to build GUIs. An easy analogy is a browser-like GUI for a shell, with the URL bar as the prompt. The URL bar provides autocompletion, history, shows state, etc., just like a shell does.


Just joined! Also love the idea of a headless shell. Excited to chat more


Just the other day I added completion support for our CLI built with Cobra and it was <10m of work.

So I’m not convinced this is still a hard thing to do. (The ui of fig does look slick though.)


Glad you like the UI!

Cobra and other CLI libraries, like oclif, can help you generate the skeleton of the CLI, stuff like subcommands, options, etc. (I'm in the process of writing the integration so you can generate a Fig completion spec the same way!)

The difference is that with Fig you can add richer completions as well. For instance, Fig's completion for `npm install` allows you to search across all npm packages: https://twitter.com/fig/status/1385401292193865731
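
For a rough idea of what that looks like, here is a hedged sketch of a Fig-style spec with a dynamic generator, modelled loosely on the public withfig/autocomplete repo (the exact field names and the real npm spec may differ):

    // Sketch only, modelled loosely on specs in github.com/withfig/autocomplete;
    // treat the field names as illustrative rather than authoritative.
    const npmInstallSpec = {
      name: "install",
      description: "Install a package",
      args: {
        name: "package",
        // A generator runs a command (or makes a request) as the user types and
        // maps its output to suggestions -- that is how "search across all npm
        // packages" style completion can work. The query term is hard-coded here
        // for illustration; a real generator would derive it from the typed input.
        generators: {
          script: "npm search --json react",
          postProcess: (output: string) =>
            JSON.parse(output).map((pkg: { name: string; description?: string }) => ({
              name: pkg.name,
              description: pkg.description,
            })),
        },
      },
    };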


Cobra also has support for completion, through defining a ValidArgsFunction. This seems to be on the same level as the completion shown for npm install.


Woah, that's really cool. This seems to check the first two of the article's points, approachability and discoverability. Are there any plans to implement the third point, interactivity, where you have small, dynamic guis embedded in the terminal?


The core focus right now is autocomplete, but this is definitely an area that we want to explore more in the future!


> Command line programs are black boxes (they could be pure bash, a compiled C++ binary, an npm package, etc), which means there is no programmatic standard for extracting the command set.

You say that but it's actually an easier problem to solve than it first appears. Both my shell, murex[0], and Fish[1] parse man pages to provide automatic autocompletions for flags.

I also have a secondary parser for going through `--help` (et al) output for programs that don't have a man page. This isn't something that is enabled by default (because of the risk of unintended side effects) but it does work very effectively.
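
As a hedged sketch of the general idea (not murex's or fish's actual parser), scraping flags out of `--help` output can be as simple as running the command and pattern-matching the option lines:

    // Minimal sketch: extract "--flag" style options (and a trailing description,
    // if any) from a command's --help output. Real parsers such as murex's or
    // fish's are considerably more careful than this.
    import { execFileSync } from "node:child_process";

    function scrapeHelpFlags(command: string): Map<string, string> {
      const flags = new Map<string, string>();
      let helpText: string;
      try {
        // Note: blindly running `<cmd> --help` can have side effects, which is
        // why this kind of scraping is risky to enable by default.
        helpText = execFileSync(command, ["--help"], { encoding: "utf8" });
      } catch (err: any) {
        // Many tools print help to stderr and/or exit non-zero.
        helpText = (err.stdout ?? "") + (err.stderr ?? "");
      }
      for (const line of helpText.split("\n")) {
        const match = line.match(/^\s*(?:-\w,\s*)?(--[\w-]+)\S*\s*(.*)$/);
        if (match) {
          flags.set(match[1], match[2].trim());
        }
      }
      return flags;
    }

    // Example: console.log(scrapeHelpFlags("grep"));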

The way I look at it, though, is that you want to cover as many things automatically as you can, but there will always be instances where you might want to manually fine-tune the completions. For that, I've also defined a vastly simplified autocompletion description format. It's basically a JSON schema that covers 99% of use cases out of the box and supports shell code being included to cover the last 1% of edge cases.

This means you can write autocompletions to cover common use cases and have the shell automatically fill in any gaps you might have missed.

[0] https://github.com/lmorg/murex

[1] https://fishshell.com


You should publish some docs about that! Maybe other tools can pick it up.

A couple years ago I wanted to tackle this problem for Oil, and there was some brainstorming here:

https://github.com/oilshell/oil/wiki/Shell-Autocompletion

https://github.com/oilshell/oil/wiki/Shellac-Protocol-Propos...

and on the oilshell Zulip in #shell-autocompletion.

The author of the Ion shell went pretty far with it, but it seems to be dormant now (or correct me if I'm wrong):

https://www.redox-os.org/news/rsoc-ion-ux-2/

https://www.redox-os.org/news/rsoc-ion-ux-3/

https://www.redox-os.org/news/rsoc-ion-ux-4-5/

To me a problem is that most shells like bash and even zsh are too impoverished to implement some kind of common protocol. For one, they don't work well with structured data.

I just looked at Fig, and it seems interesting as well. The approach of using Mac-specific APIs and doing it outside of the shell skirts that problem.

I did want to make Shellac correct and precise. Specifically it should have the invariant that if the tool suggests a completion, it should actually be a prefix to a valid command! And it was also supposed to separate the problem of completing the shell language from completing the argv language -- those are two different things. For example shells complete variables like $P -> $PATH, etc. And they complete their own builtins.
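
To make the separation concrete, here is a hypothetical sketch (not the actual Shellac proposal) of the two stages as request/response types:

    // Hypothetical types, just to illustrate the separation argued for above;
    // this is not the real Shellac wire format.

    // Stage 1: the *shell* parses its own language (quoting, $VARS, builtins)
    // and works out which word of which simple command the cursor is in.
    interface ArgvCompletionRequest {
      argv: string[];      // e.g. ["git", "chec"]
      wordIndex: number;   // the cursor is in argv[1]
    }

    // Stage 2: a per-tool completer (possibly the tool itself) answers for the
    // argv language only. The invariant: every suggested word, substituted back
    // into the command line, must yield a prefix of a valid command.
    interface ArgvCompletionResponse {
      completions: { word: string; description?: string }[];
    }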


There are docs on the autocomplete schema, but I'd wager it will be very much a NIH thing, because it's an easy problem to solve but one that nobody seems to agree on. Thus everyone will look to solve it themselves. And I can't really complain about that, because I did just the same myself -- rather than looking for an existing schema I decided to define one myself.

The docs on my schema are at https://murex.rocks/docs/commands/autocomplete.html but they were a little rushed so could use more examples and a little more TLC.

Also this XKCD strip feels apt: https://xkcd.com/927/


OK, thanks for the info. Well, I'd say coming up with an approximate solution for one shell is easy -- but that's the state of the art. That's what bash, zsh, and fish do. (Not that they are the same -- I'd like something closer to fish, not bash.)

The goal was to improve on the state of the art, which is hard. The original motivation for Shellac is here and I discussed it with many people. The Ion shell author wrote some code toward it, and so did a zsh dev.

https://github.com/oilshell/oil/wiki/Shellac-Protocol-Propos...

But it didn't really go anywhere for a variety of reasons. One is the "bootstrapping" problem. That is, there's no incentive for anyone to implement shellac if there's no corpus of completions. Likewise there's no incentive for the author of new-popular-tool to use it if no shells implement it.

The other problem is whether the completion is declarative or imperative, which you hinted at. If it's more imperative then you've automatically coupled it to a language like Murex. If not then I think the UI will be limited and it will break some of the invariants I wanted (but are maybe not strictly necessary).

I think that most people on the #shell-autocompletion Zulip wanted it to be declarative. I was one of the few people who said that declarative isn't enough, mainly because the ACTUAL command lines are written in imperative languages. It's sort of an argument around computational power.

To me the fact that all the popular systems are imperative is evidence in favor of that, until reality shows the opposite :)

I also wanted to push the burden of developing the logic on the people who write the tools -- THEY know what its syntax is. This also solves versioning problems. So the idea was that EITHER the shell or the TOOL ITSELF would be a Shellac server. In the latter case, it would never get out of sync, which does happen in bash, zsh, and fish.

(Although to be fair, if you dynamically scrape --help, that also solves the versioning problem, but not if you bake it in. bash-completion actually DOES the dynamic execution and scraping of 'ls --help' on TAB, which I didn't know before implementing Oil's autocompletion.)


One of the things that we discovered when trying to build out a declarative standard is that it can never encompass everything, precisely because many CLI tools implement their parsing logic imperatively! Our solution was to start off with a declarative schema that can be supplemented with an imperative escape hatch.


Yes, I'd like to see how the imperative escape hatches work for Fig and Murex (and how many completions need to use it)

Actually I just remembered I enumerated some "unusual cases" here:

https://github.com/oilshell/oil/wiki/Diversity-in-Command-Li...

Is the escape hatch for Fig in JavaScript? One thing I would like to see "rescued" from Shellac is the idea of allowing the escape hatch to be in any language, not Oil or Murex, or JS. In particular, it could be the language that the tool itself is written in!


I wish all of these initiatives luck, but for the basic Unix utilities they couldn't even agree on some sort of common format to allow for introspection (i.e. the tool itself would allow for the extraction of its parameter names, parameter types, etc.).

I somehow feel that all these boil-the-oceans approaches regarding rich CLIs aren't going to make it very far.

Which is really unfortunate :-(


You don’t need to cover every parameter and use case of every command. You just need to cover the most common use cases and give them priority over the more esoteric flags. So beginners get help but power users can still use the shell. I say “just”, but obviously even that is still a big task.

As for covering every flag for every tool, some shells already parse man pages and --help output to provide meaningful autocompletions automatically. Murex does this (https://murex.rocks), as does fish (https://fishshell.com).


zsh supports extracting options from the command's --help output and it even recognizes when FILE or DIR is specified as the argument for an option.

There is probably something like that for bash too.


While I appreciate any experiments in making CLI more approachable to the public, I do disagree with the author's assertion that CLI has "low" discoverability and interactivity. Many modern CLI environments have excellent discoverability and interactivity: shells such as PowerShell and zsh, and REPLs such as ipython all come to mind. Even bash can be customized to do fairly well.


Erm, compared to GUI interfaces, shells really do have very low discoverability. Someone really new to the shell could stare at their terminal emulator for months before they would even come up with "ls".

Yes, shells can be made more interactive, but that requires setup, prior experience, and know-how. How would someone new know to install zsh and something like ohmyzsh?


I like this because it's trying to be more than a classic terminal. And I hate it for the same reason, it repeats everything a classic terminal has, including the awkward syntax like a line that begins with ">" and parameters that are preceded by "--". Sure, it's second nature to us, but habit is a terrible way to tell what's right and what isn't.

We need to go back to first principles.


These are not features of the terminal at all. “>” is a customizable prompt of your shell, and “-” is a convention used by most (but not all) binaries. It’s also a very useful convention; how else are you going to tell arguments apart from options?


You'd think the very purpose of calling it a "Graphical Command Line" would be that you're not restricted to a literal line of text.

Just because it should be convenient to type and navigate with a keyboard doesn't mean the command line can't have structure beyond just the characters on it.

I can fill in a webform with just the keyboard. Tab, type, tab, type, tab, type, tab to the submit button, hit enter. Done.

So I didn't have to type "--" anywhere. I didn't have to type the field names at all actually. Not saying I want them fixed per se, but "telling them apart" shouldn't be a real question in a graphical terminal UI.

Likewise we should never escape strings or quote strings in a Graphical Terminal IMHO. They should just be text fields integrated seamlessly into the line, with visible boundaries.

If you copy the line and paste it in a text editor, then sure, maybe show all this ghastly encoding happening underneath. But in the GUI Terminal... nah I don't accept it.

EDIT: While we're at it, I don't think we should use monospace fonts in a GUI terminal. Certainly a font with distinct letters like lI1 and 0O, but not monospaced.


The animations in this post remind me of nothing so much as a Symbolics LISP Machine: https://www.youtube.com/watch?v=o4-YnLpLgtk

which also had semantically driven clickable data structures one could use to build up a command line invocation. In about 1982.


We were inspired by this idea while building early prototypes of Fig (https://withfig.com) and wanted to bring rich interactivity/discoverability to existing terminals.

The goal is to layer on a CLUI style interface to all of the CLI commands you already use!


PowerShell solved all this: IntelliSense, introspection, APIs for compiled apps (although I think it's .NET only), typed output.

Still hasn't caught fire though, even for me. I respect it but I still reach for bash. I'm not really sure why except perhaps momentum.


Not sure how much of this still applies, but I remember reading this a while back when I also realized PS just didn't click for me

https://medium.com/@octskyward/the-woes-of-powershell-8737e5...


Reading that link, even back when it was written it comes across as an article of first impressions, with the bias of someone already comfortable from many hours of Unix shell use.

1. Powershell isn't a complete greenfield and exists in a context, which explains many of the complaints.

2. Any new tech requires learning.

I could address each point but most galling was the conclusion:

"At this point we realise the truth — PowerShell is not designed to be an actual shell for users" Because wget did not output to a file by default?!? Really?!?

IMO, this article failed to hit the mark back in 2014 when it was written let alone now.

Edit: conciseness



Got about a quarter of the way through and none of it applied to modern Powershell.


There are advantages to text. Some of my favourites:

1. Humans can understand text too!

2. Text can travel over every transport.

3. Text is benign.

4. Large text is large and small text is small, but all objects look the same size.


You still have text. Everything can still be parsed.


No. Even in PowerShell, loss of type fidelity and other data loss is inevitable when objects are serialised (what most people call "parsed").


Great to see the progress on this. It seems my fears have been assuaged and the CLUI keeps all the power of a command prompt while adding discoverability. I still think the average user will stop reading the drop-downs after a while and just treat it like a normal command prompt once they've memorized everything, but up until that point this seems like a better way to get newbies to learn than telling them to read the man pages. That said, I do still think it's unfair to say that this is equally as approachable as a GUI for the average person. Certainly it's better than a standard CLI, but it's more of an in-between, I think.


I think Cisco's context-sensitive CLI really nails this problem. If you've never used it before, pressing "?" at any point in your command will show you a big list of all the possible inputs, and a brief description of what they do. Here's a video of it: https://www.youtube.com/watch?v=Mb7k5eo3lJw


"Terminals, the primary platform for command-lines, are intimidating and feel like a black box to non-technical people."

That's learned helplessness.

My mother, who is a distinctly non-technical person, learned to use IRC when she was 60. Why? Because her friends were already using it. She wrote down a series of commands that her friends sent her in email, carried them out, and then spent years happily chatting with them.

Humans are very good at learning complex things when they see something they want at the end of it.


Is she friends with a crew of 90s hackers, or are there just some elderly ladies who recognise the brilliance of IRC that's missed by most modern techies?


There are in fact (or were, I haven't checked in on this in a few years) a bunch of middle-aged to elderly ladies who keep in touch via IRC.


There are countless stories like this, and over the years I have witnessed many "non-technical" people using computers without "GUIs". As in the parent comment, I observed they often jot down the commands and that's the end of it. Time at the computer used to be more limited. People take the most direct route to getting what they want done, then it's back to real life. Other than nerds, no one wants to stare at screens 24/7.

Non-technical people will use whatever they are being asked to use. If the "CLI" was popular amongst software authors (as it once was in the 1980s), people would use it and get things done the same way they do with "GUIs". I saw this first hand. Computers for many folks are just a means to an end.

The CLI, if it were the chosen standard, might make today's "tech" company strategies more difficult. How would ads be delivered? How would "dark patterns" avoid discovery? How would vendors justify "upgrades" without resource-hungry GUIs that slow down as they bloat?


To add to that, many non-technical people were perfectly fine using DOS when that was all they had on a computer. Yes there was a learning curve to it and people had notepads with commands written down. Yes it's fine for things to require education to be useful. No it's not fine to dumb everything down and take away the ability to look under the hood from everyone just because some subset of people isn't very amenable to education.


Terminals specifically feel like black boxes even to technical people. And that is even after reading things like https://www.linusakesson.net/programming/tty/

Terminals are something I find a lot of people take for granted, without really stopping to think about how they actually work. Very much a black box.


IRC can go a long way to modernizing the experience without changing the underlying nature of the "terminal".

I mean, what is a terminal? It's a chat with a bot we call "the terminal". We can look at chat apps and see how we can integrate graphics and commands into the chat without it looking like an obscure 80s thing.


I love this piece, but this part is hilariously wrong:

> Command line interfaces. Once that was all we had. Then they disappeared, replaced by what we thought was a great advance: GUIs.

The CLI is still incredibly relevant; it’s core to this day to the single most important task you can perform with a computer: programming. There are only a few applications that can safely be considered more important: the browser, the spreadsheet, the word processor, maybe the email client. After that you start to get to things similar in relevance to the terminal. That’s not “disappearing”; that’s enduring importance that makes it a monumental achievement deserving of a place in the software pantheon. The CLI only gets punished because it was once even more important, before the GUI. We should all be striving to create software with the enduring importance of the terminal.


Sorry but what programmers use is not relevant here.

The trick is to bring it to mainstream audience in a way that's not awkward.

The closest thing I've seen to mainstream terminal is the JavaScript console in browsers but... that's again for programmers (albeit also casual ones).

In professional apps like Premiere, the closest thing to a terminal is a search box that's smarter than just looking up words in the documentation. And that is preeeeeeetty shit.

We can do better.


Applications like AutoCAD have long had a terminal window where you can perform one-off scripting, introspection of state, and so on. Blender has something similar with a built-in Python REPL:

https://docs.blender.org/manual/en/latest/editors/python_con...


They do, yeah, good catch there. I gotta say 3D modeling/rendering/animation is pretty intertwined with scripting/programming these days.

But I still don't find this very mainstream. It's a bit like the JS console in browsers.


The primary bottleneck is monetization, isn’t it? CLI workflows are predominantly FOSS and free, so no one has much of an incentive to do the streamlining and marketing that mainstream products need.

Some of the stuff that is immediately useful to mainstream users:

- ncdu

- youtube-dl

- spotdl

- aria2

- Homebrew Casks

People aren’t dumb. If these were marketed properly to them, they would use them.


I think there are plenty of examples of command-driven text interfaces in popular products. But they're delivered in a way where either the functionality is intentionally limited, or the order of the commands is interchangeable. For example:

+ Search engine search bars

+ voice services like Siri and Alexa

+ Excel formulas

+ Slack and MS Teams use of IRC-like /commands in the chat window


> Sorry but what programmers use is not relevant here.

Why do you say that? This article is explicitly talking about programming tools, not mainstream tools. "At Repl.it, where our goal is to build a simple yet powerful programming environment"


Not to be critical, but this doesn't seem to be much different from a drop-down menu.

The implementation seems to be missing the point of the cli, scripting. If there is not a primitive way to combine and chain commands and reduce repetition, then this doesn't look like enough by itself to sway a userbase.


Take something like a log file

Mangle it with sed, grep, awk, cut, sort, perl, wc, whatever

Get the result you want

Dump the instructions in a script

Optionally stick echo -e "Content-Type: text/plain\n\n" at the top of the script, chmod 755 and host

Beautiful, job done, problem solved, move on.


So all they need to do is add a pipe operator, and we're in business...


I'm still wrapping my head around what the purpose of this is.


One of the arguments in the article against GUIs is GUI density, i.e. you have so many features that it becomes difficult to discover them in the UI. I believe this is a valid issue, but I don't see how CLUI is better at solving it. CLUI employs search to figure out what you might be trying to do when you type a command. This works if your seed command (the command you typed in) makes sense to the search engine. The same search technique is employed by GUIs with lots of features, i.e. you type what you want to do into the help box and it tries to understand your intent and returns documentation explaining the exact steps to take.

At the end of the day, I think GUIs and CLIs are targeted at different people, and folks should pick whichever works for them.


Feels a bit like overselling this as something radically novel, when PowerShell has most of the same stuff, even if in slightly different form. You have advanced autocomplete (based on introspection), automatic form building (in ISE), and prompting for missing parameters in PowerShell.

Still, this replit "CLUI" might be a pretty good execution of the idea, so I'm not dismissing it completely. It might be an interesting experiment to plug it into a PowerShell backend to see how well it works with more real-world stuff.


Not just PowerShell, but also zsh and even (tricked-out) bash to some extent. Not to mention REPLs such as ipython and others.


Great to see this on HN again. Since we posted this article we have started building CLUI into our product with great success.

I posted some videos on Twitter showing how we use it in our app. Basically any place where you can imagine a lot of UI bloat, it's replaced by a CLUI bar: https://twitter.com/amasad/status/1390005374741143554


I'm glad you guys are still doing stuff with this; I thought the demo was really promising. Even though it's specific to your thing, it seems like a good test case for how this can best be applied in the general case. Putting this in something like extraterm would be HN's nightmare, but I would be very interested to see whether there's a future in a CLI base that actively takes advantage of the fact that it's not emulating 40-year-old hardware.


Would you good people know about any efforts in this direction that are not so "top-down"? It seems to me that there are a few "addons" to the classic terminal that could go a hugely long way.

Offhand, image (et al.) preview still feels like a mess: a million ways of doing it and none of them is that good.

Also, I dream about something like "fzf, but with a very clearly delineated out-of-the-current-shell window, perhaps like rofi" or something?


fzf-tmux with the new tmux window popup?


This approach, I think, fails to address the real problem: I do not particularly care how I've reached command 1/2/3/4. The problem comes when command 4 has many options. When you display those in a GUI with default values it is more or less easy for even the uninitiated to get a grip. Not so much with a CLI or this CLUI.

Best tools with complex logic like CAD software usually combine CLI, CLUI and GUI to be ergonomic.


The interactive UI shown is funny: the command allows you to type everything out except for the user name, which is in a new box. This could easily have been on the same line. Something like a color or date picker would make more sense.


See the Oberon system for hints as to how to do it.


This is one thing that TempleOS, and before it things like OpenGenera, did well. You could type commands, but an interface backed the CLI, and you could interact with the output they generated, like a clickable listing of files.


I would dearly love to use a system that worked like these, and also modern stuff that I have to use, like web browsers and device drivers.

There's an awesome public domain fork of TempleOS called Shrine[0] that has networking and a normal shell. When is someone going to throw some chrome and polish on top of that and sell it as the OS of the future? I'll back your kickstarter or whatever.

[0] https://shrine.systems/


off-topic: went to comment on the article there, but replit comment section is already overtaken by spam :(

i guess that is a sign of success?


We have a comment section?


apparently you do :)

click "edit on replit" at the article and you go to:

https://replit.com/@util/replit-blog?fileName=posts/clui.md

it shows:

   > 70 Reactions
   > 47 comments


I think they're talking about the whole blog's Explore page.



