Gave it a try a couple of years ago and it had issues with larger JavaScript bundles: passing a lot of objects on the stack required recompiling the entire process with an increased stack size, or modifying the threads it runs on to accommodate it. Even with that, reasonably deep call stacks would overflow.
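For what it's worth, the engine's own overflow check can at least be raised at runtime via JS_SetMaxStackSize (on the runtime object, if I remember the signature right); the OS thread stack is the part that needed the recompile. A rough sketch, with an arbitrary 8 MB limit:

    #include "quickjs.h"

    int main(void)
    {
        JSRuntime *rt = JS_NewRuntime();
        /* Raise QuickJS's internal stack-overflow check (8 MB is arbitrary).
           The OS thread the context runs on still needs an equally large stack,
           which is where the recompile / thread fiddling comes in. */
        JS_SetMaxStackSize(rt, 8 * 1024 * 1024);

        JSContext *ctx = JS_NewContext(rt);
        /* ... evaluate the big bundle here ... */
        JS_FreeContext(ctx);
        JS_FreeRuntime(rt);
        return 0;
    }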
I also thought the deterministic GC would mean lower memory usage, but it's actually worse than V8 (which keeps all the intermediate bytecode and the JIT-ed machine code in memory as well, plus the dangling objects).
Still impressed that someone could write an ECMAScript-compliant engine from scratch, but it looks more like a proof of concept (the fact that it didn't get any updates in a year doesn't help either).
Well, roughly-HTML, almost-CSS and fairly-but-not-completely-JS. Historically, TIScript was the biggest problem for web compatibility, being very web-incompatible, but even the QuickJS version is already incompatible with the web in a couple of ways. And its fork of CSS doesn’t implement things like Flex and Grid, but rather something proprietary with similar scope, some advantages, some disadvantages, but mostly just different, which I would say is generally a bad thing.
Which is fine. The goal is to make *cross-platform* desktop GUIs that are also lightweight. I think Sciter could define a subset of HTML|CSS|JS and provide some syntax checker (e.g. "this feature is not supported"). For desktop GUIs we don't need all those web APIs etc. Sciter is super lightweight compared to Electron and Tauri.
You very regularly would like cross-platform desktop apps to work on the web as well, or at least to be able to share significant chunks of code. Given that, the fact that Sciter isn’t actually HTML/CSS/JS but rather a superset of a subset of each becomes very notable: you certainly won’t get total code sharing, and you’ll probably even struggle to get significant UI sharing.
I think Sciter is not Electron per se; it does not do web rendering. Instead it uses HTML/CSS/JS for desktop (no internet) development, to replace, say, Qt-based GUIs.
As I recall, the only major advantage of QuickJS is that it can run on machines where v8 couldn't dream of fitting.
Data sizes almost always exceed the size of the code itself. I wouldn't consider it surprising that the IR would fade away into the noise.
V8 can also make tradeoffs that QuickJS probably cannot. For example, V8 internally represents strings as 1-byte Latin-1 characters rather than 2-byte UCS-2 whenever the content allows it. This halves memory consumption, but adds a lot of code to the implementation that a microcontroller running QuickJS may not have room for.
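Not V8's actual data structures, of course, but the tradeoff is roughly this kind of narrow/wide split, sketched here in C:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustration only: strings whose code units all fit in Latin-1 are
       stored as 1 byte per character, the rest as 2-byte UCS-2. */
    typedef struct {
        uint32_t len;
        uint8_t  is_wide;      /* 0: 1-byte Latin-1, 1: 2-byte UCS-2 */
        union {
            uint8_t  *narrow;  /* one byte per code unit  */
            uint16_t *wide;    /* two bytes per code unit */
        } data;
    } Str;

    static Str str_from_ucs2(const uint16_t *src, uint32_t len) {
        Str s = { .len = len, .is_wide = 0 };
        for (uint32_t i = 0; i < len; i++) {
            if (src[i] > 0xFF) { s.is_wide = 1; break; }
        }
        if (s.is_wide) {
            s.data.wide = malloc(len * sizeof(uint16_t));
            memcpy(s.data.wide, src, len * sizeof(uint16_t));
        } else {
            s.data.narrow = malloc(len);          /* half the memory */
            for (uint32_t i = 0; i < len; i++)
                s.data.narrow[i] = (uint8_t)src[i];
        }
        return s;
    }

    int main(void) {
        const uint16_t ascii[] = { 'h', 'e', 'l', 'l', 'o' };  /* fits in 1 byte/char */
        const uint16_t cjk[]   = { 0x4F60, 0x597D };           /* needs 2 bytes/char  */
        Str a = str_from_ucs2(ascii, 5);   /* stored narrow: 5 bytes of payload */
        Str b = str_from_ucs2(cjk, 2);     /* stored wide:   4 bytes of payload */
        free(a.data.narrow);
        free(b.data.wide);
        return 0;
    }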
It’s really interesting to hear your performance issues with QuickJS there.
I found the opposite sort of performance profile: workloads with few small values on the stack, shallow call stacks, and lots of calls back and forth between C and JS worked really well. In a lot of situations it outdid V8 and ChakraCore for me (with JIT disabled). I didn’t dig into the exact reasons why, though.
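The C<->JS boundary in QuickJS is cheap and simple enough that this kind of back-and-forth is easy to reproduce; a minimal sketch of what those round trips look like (nativeAdd is a made-up name):

    #include <stdio.h>
    #include <string.h>
    #include "quickjs.h"

    /* A native function exposed to JS. */
    static JSValue native_add(JSContext *ctx, JSValueConst this_val,
                              int argc, JSValueConst *argv)
    {
        int32_t a = 0, b = 0;
        JS_ToInt32(ctx, &a, argv[0]);
        JS_ToInt32(ctx, &b, argv[1]);
        return JS_NewInt32(ctx, a + b);
    }

    int main(void)
    {
        JSRuntime *rt = JS_NewRuntime();
        JSContext *ctx = JS_NewContext(rt);

        /* Expose the C function as a JS global. */
        JSValue global = JS_GetGlobalObject(ctx);
        JS_SetPropertyStr(ctx, global, "nativeAdd",
                          JS_NewCFunction(ctx, native_add, "nativeAdd", 2));
        JS_FreeValue(ctx, global);

        /* JS calling into C in a tight loop, result read back out in C. */
        const char *src =
            "var t = 0; for (var i = 0; i < 1000; i++) t = nativeAdd(t, i); t;";
        JSValue val = JS_Eval(ctx, src, strlen(src), "<bench>", JS_EVAL_TYPE_GLOBAL);

        int32_t total = 0;
        JS_ToInt32(ctx, &total, val);
        printf("total = %d\n", total);   /* 499500 */

        JS_FreeValue(ctx, val);
        JS_FreeContext(ctx);
        JS_FreeRuntime(rt);
        return 0;
    }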
It's possibly related to the size of the JavaScript bundle being used. In my case it was 4 MB of minified JavaScript being evaluated (React Native with a bunch of dependencies).
I am always fascinated by how a single person can build a language runtime. That too for a language like JavaScript, where a large number of features are introduced on a regular basis.
JS is relatively easy to implement if you don't care about performance. (QuickJS does care about performance though, which makes it more amazing.) The apparent complexity of the ECMAScript specification mostly comes from lots of pseudocode written in prose, which leaves no room for differing interpretations and actually makes the implementor's job easier. Newer JS features also rarely introduce new concepts and can mostly be desugared into the core language, which is pretty much isomorphic to ES5 plus some library bits like Proxy. That means you can start with ES5 (an even smaller language) and build newer features on top of that.
Would you call running a marathon easy? Anyone can run. But could you run 26.2 miles without stopping? The ECMAScript 12 spec is 44,000 lines and contains 269,000 words. It's enormous. It's the same way with QEMU: you have to spend months doing an enormous number of things before something like that can run even the simplest, most everyday programs. Bellard is legendary for his stamina. Not many programmers can match it. The only one I can think of would probably be the SerenityOS guy.
I like your marathon analogy. Not everyone can or does run a marathon, but there are still lots of marathoners. You do need stamina, but you don't need an inhuman level of stamina like Bellard's. If you don't stress yourself and pick your enemy wisely (like, start with ES5) it is very much doable.
Note that the GH repo is just a mirror; development happens elsewhere. Judging from the copyright notices, there is another main developer involved, Charlie Gordon.
One great feature of QJS is that it can create C executables or modules from JS.
AFAIK the JS code is not converted to C; the compiler produces bytecode, which is bundled with C code.
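The same compile-to-bytecode, serialize, deserialize-and-run round trip that the qjsc-generated C performs at startup can be reproduced by hand with the public API, which makes it clear nothing is being translated to C. A sketch, with error handling omitted:

    #include <string.h>
    #include "quickjs.h"

    int main(void)
    {
        JSRuntime *rt = JS_NewRuntime();
        JSContext *ctx = JS_NewContext(rt);

        const char *src = "var x = 6 * 7; x;";

        /* Compile only: produces a function object, nothing is executed yet. */
        JSValue compiled = JS_Eval(ctx, src, strlen(src), "embedded.js",
                                   JS_EVAL_TYPE_GLOBAL | JS_EVAL_FLAG_COMPILE_ONLY);

        /* Serialize to the flat bytecode blob qjsc would dump into a C array. */
        size_t len;
        uint8_t *buf = JS_WriteObject(ctx, &len, compiled, JS_WRITE_OBJ_BYTECODE);
        JS_FreeValue(ctx, compiled);

        /* At startup, the qjsc-generated C essentially reads it back and runs it. */
        JSValue fn  = JS_ReadObject(ctx, buf, len, JS_READ_OBJ_BYTECODE);
        js_free(ctx, buf);
        JSValue ret = JS_EvalFunction(ctx, fn);   /* consumes fn */

        JS_FreeValue(ctx, ret);
        JS_FreeContext(ctx);
        JS_FreeRuntime(rt);
        return 0;
    }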
This is interesting because it means it's possible to execute JS code from languages that have C interop. So, for example, you could do SSR of Svelte components in a Go server application. At least in theory; I've never tried this.
You can use QuickJS as an Actually Portable Executable.
    $ echo 'console.log("hello world\n");' >hello.js
    $ zip o//third_party/quickjs/qjs.com hello.js
      adding: hello.js (stored 0%)
    $ o//third_party/quickjs/qjs.com /zip/hello.js
    hello world
Just add your source code to the qjs.com executable using Info-ZIP and you've got a single-file releasable binary that'll run on seven operating systems. It's also about 700 KB in size.
If you want to use QuickJS in the browser and/or Node.js for running untrusted code, building plugin systems, etc., I have a library here that wraps a WASM build of QuickJS:
QuickJS is awesome, simple to use and very easy to integrate.
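The embedding surface really is just a handful of calls; a minimal sketch (error handling kept to a bare minimum) of evaluating a script and pulling a number back into C:

    #include <stdio.h>
    #include <string.h>
    #include "quickjs.h"

    int main(void)
    {
        /* One runtime, one context, evaluate a string, read the result back. */
        JSRuntime *rt = JS_NewRuntime();
        JSContext *ctx = JS_NewContext(rt);

        const char *src = "[1, 2, 3].reduce(function (a, b) { return a + b; }, 0)";
        JSValue val = JS_Eval(ctx, src, strlen(src), "<embedded>", JS_EVAL_TYPE_GLOBAL);

        if (!JS_IsException(val)) {
            int32_t sum = 0;
            JS_ToInt32(ctx, &sum, val);
            printf("sum = %d\n", sum);   /* prints: sum = 6 */
        }

        JS_FreeValue(ctx, val);
        JS_FreeContext(ctx);
        JS_FreeRuntime(rt);
        return 0;
    }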
Over the past few years I slowly built a small JS runtime using QuickJS as the engine and libuv as the platform layer, amongst other things, in case anyone wants to take a look: https://github.com/saghul/txiki.js
QuickJS is awesome. I’m using it to allow executing JavaScript in a markdown-like file format, which lets you manipulate the document tree programmatically before rendering to HTML. It has turned out much nicer than CPython, which I was using before.
Project seems really cool! The docs seem a little sparse, unfortunately. After reading through the calculator code I am still curious: how do you use a custom resolver for modules? I want to make something completely sandboxed.
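Not sure how the WASM wrapper exposes this, but at the C level module resolution is hooked with JS_SetModuleLoaderFunc, and a deny-everything loader is one way to get a fully sandboxed import graph; a rough sketch:

    #include "quickjs.h"

    /* Refuse every import. A real resolver could instead serve sources from an
       in-memory map and compile them with
       JS_Eval(..., JS_EVAL_TYPE_MODULE | JS_EVAL_FLAG_COMPILE_ONLY). */
    static JSModuleDef *deny_all_loader(JSContext *ctx, const char *module_name,
                                        void *opaque)
    {
        JS_ThrowReferenceError(ctx, "module '%s' is not allowed", module_name);
        return NULL;
    }

    int main(void)
    {
        JSRuntime *rt = JS_NewRuntime();
        /* NULL keeps the default module-name normalization. */
        JS_SetModuleLoaderFunc(rt, NULL, deny_all_loader, NULL);

        JSContext *ctx = JS_NewContext(rt);
        /* ... any `import` evaluated in ctx now fails instead of touching disk ... */
        JS_FreeContext(ctx);
        JS_FreeRuntime(rt);
        return 0;
    }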
I wonder if it's a "raised barrier to entry" to prevent low-effort drive-by PRs. I have a couple of extremely-small OSS libs that are available in public package repos and are hosted on Github, and even those have gotten the drive-by "I slightly reworded your README" PRs that don't help. I can't imagine the volume of noise that larger projects get.
By requiring a .patch file to be sent via a mailing list, the sort of "burden of effort" shifts away from "log into Github and click the pencil icon in the preview for README.md" and towards a slightly-heavier-weight minimum threshold -- one that shouldn't be so bad as to discourage folks making earnest valuable changes, but might be enough to make low-effort spam more annoying for the spammer than the spam-ee.
Barrier to entry is a good call. From my experience on some rather large OSS projects, the noise-to-signal ratio on PRs tends to track the project's target audience.
Using a mailing list to manage (incl. browse) contributions really sucks. The way that GitHub abuses pull requests and all the anti-productive derpiness that has arisen around it, though, sucks even more. It will be nice when the world gets shaken out of the collective hallucination that GitHub is at all a reasonable and efficient way to do things, instead of what it actually is: another dark-pattern-employing, walled garden social network that prioritizes engagement and growth over productivity, privacy, accessibility, and quality of interaction.
The rise of GitHub seems to me one of the strangest things in the world. I've heard it called "the front page of the FOSS movement"! But the code for GitHub is not available on GitHub! It was weird enough when a closed-source proprietary company put themselves in front of an open-source distributed VCS and got traction, but then they were acquired by Microsoft!? That's like Sauron buying the Shire and charging the hobbits rent on their own land or something. I feel a rant coming on, so I'm going to check myself here.
I think if the world ends, it won't be due to Fire or Ice. I think the name of doom is "Convenience".
I'll continue your rant. Bruce Perens and Eric Raymond fucked us. They took freedom and made it a commodity palatable to corporations. Now, Microsoft, via Github Copilot, has plans to sell our own code back to us as a subscription-based B2B productivity tool for businesses to create proprietary code. I couldn't even imagine such a dystopian outcome in 1998.
Some proprietary software authors would likely feel the same way about free software. For example, Phil Katz drank himself to death after zlib used his invention to create the most prolific library in the world. I'd also imagine all the jobs software is eating would feel the same way. Now software is eating itself.
I thought it would happen much faster. Programming isn't that much harder than Go (the game not the lang) is it? Once automated programming becomes common and useful the problem domain shifts to mapping human intention(s) to the automated machinery, or how to write tame AI that can re-write itself w/o going feral? (Personally, I don't think it's possible, and Isaac Asimov pointed out why. It's the same "strange loop" explored in a handful of sci-fi stories that have hyper-lucky characters. What, ultimately, is good?)
In re: Free software vs. charging for copies of software, I looked at the world situation when I was younger and figured that no company or even nation could pay me what I'm worth (in terms of the value that my ability with computers could potentially unlock) and the most efficient way to "work" would be to give away my output and take advantage of the "rising tide that lifts all boats". I took Bucky Fuller very seriously, and still do: we have all the technology to make a secular utopia, it's just a matter of logistics and psychology. And the computers can handle the logistics easily. I figured the psychology would handle itself once word got around, but it hasn't really. (I don't know if I was just incredibly naive or if humans are just incredibly stupid, it's a problem that still vexes me.)
How could nations pay us what we're worth? The pay schedule for coders already starts at Major General. A few promos and we're paid more than the President. Still can't afford to buy a house though.
Well, I mean, government contracts can be pretty lucrative, eh?
But I'm not being quite so literal. Imagine, say, all the value saved/created by the BitTorrent protocol, there's no way (that I can think of) for Bram Cohen to garner even a vanishing fraction of that value.
There's no point if you think bittorrent's impact has been creating value rather than giving people the means to acquire value without giving. An economy without exchange is not an economy. Please do not interpret this as me saying economies are a good thing. The system is biased and new technologies put power into the hands of smart individuals springing up around the world.
> There's no point if you think bittorrent's impact has been creating value rather than giving people the means to acquire value without giving.
Ah, sorry, I should have been more clear above. When I speak of the value that BitTorrent created I don't mean piracy, I mean all the saved bandwidth and resources from having a more efficient protocol for transferring large files (for legitimate uses.) Maybe it was a bad example, instead think of John Cristy and ImageMagick, or Linus and Linux, the idea is that one contribution by one talented and skilled programmer can have huge, open-ended value, but that it's very hard to actually get paid even a minuscule fraction of that value.
To me, back in the day, in my fiery youth, it seemed much easier to reprogram economics itself by developing and releasing technology (Bucky's "Design Science Revolution") than to try to go the Jobs/Gates route and cash in. A quarter of a century or so later, it seems to me that the Free Software movement has pretty much failed, and I should have paid more attention when Etsy was handing out stock options...
> Please do not interpret this as me saying economies are a good thing.
It beats banging rocks together, but yeah. I'm in the "It was a bad idea to come down out of the trees." camp most days.
> The system is biased and new technologies put power into the hands of smart individuals springing up around the world.
We do live in interesting times, eh? I should take this opportunity to tell you that it's great fun watching you become a legend in your own time. Cheers! and well met.
I think people like Cristy and Torvalds just want to help out. It's fiercely competitive giving things away for free in our gift economy. Some devs work hard for years building open source software without ever knowing the satisfaction that people are using it.
> I should take this opportunity to tell you that it's great fun watching you become a legend in your own time. Cheers! and well met.
The pull request workflow is not "anti-productive derpiness". You literally click one button and the pull request is merged. With email you have to save the patch from your webmail client† somewhere, then apply it on the command line, and then push your changes. This workflow inefficiency is multiplied by the number of patches you have and therefore adds up quickly. There is no good reason for it other than nostalgia for the old days of Unix when everything was email based because the Web wasn't very mature yet.
†Yes, most people use webmail, not native desktop email clients, and no, it is not reasonable to expect people to switch for this.
You're arguing the old argument CLI/Terminal vs. GUI. Clicking a button vs. typing some commands is not the productivity bottleneck (or if it is you have bigger problems, eh?)
(I've seen terminal jockeys do things that blew my mind, emitting continuous streams of characters that seamlessly switched between terminals and functions, orchestrating their machine like some virtuoso pianist, moving so fast it was impossible to follow. GUIs offer such folk nothing and I wouldn't be surprised if Fabrice Bellard was of this ilk.)
As a contributor, I found sending .patch files (or attaching them to a bug report) much easier than the pull-request-based workflow, especially when there are changes after the patch is made.
You have your thumb on the scale and are subjecting GitHub to the more lenient half of a double standard. Just like lots of software that gets billed as working with "a single click" even though it doesn't[1], there's lots more to it than you're saying here. Go do a sober and objective measurement of how involved GitHub actually is, including scenarios for "I don't have an account yet", "I have an account, but I haven't forked/changed/committed my contributions yet", "I have cloned the original repo to my computer and committed the last few days of my work and need to submit my changes now", and "I need to maintain my privacy from casual snooping and to protect myself from people who would be casually willing to hurt me"[2].
And I'm continually astounded whenever I bring up any GitHub criticism and get in response folks so hurried to try and "but Hillary!" me while leaping to the conclusion that I like mailing lists—even here in this case where I deliberately took special effort to start off by outright saying they suck. Let me be emphatic (again): mailing lists _really_ suck.
> This workflow inefficiency is multiplied by the number of patches you have
On the topic of things that suck: Git's CLI. Having said that, I'm not going to lie about it being worse than it is. You've moved your thumb off the scale. You're now saying things that aren't true.
Git's native workflows should be better. So should GitHub's.
> There is no good reason for it other than nostalgia for the old days of Unix
Go nerd-strawman someone else. I'm not Drew DeVault. I'm not the easily-take-downable bogeyman that, for your own convenience, you wish you were engaging with. I'm saying GitHub sucks because it does.
(As further data points: it's been only since, like, sometime after COVID lockdowns began that GitHub's site was finally fixed so that when you visit on mobile now, you don't get a horizontal scrollbar anymore. And this is the pre-eminent, tech-oriented platform that people can't stand to see being heralded as anything other than the thing that makes doing technical work feel like normal and approachable for normal people? Their "wikis" aren't even wikis, for fuck's sake.)
It's how Git was created to operate. The Linux kernel, probably the largest software project in history, only operates through emailed patch files too.
Linux uses pull requests, too, it's just that it uses them the way they were meant to be used—among a limited cohort of collaborators frequently pulling from one another—instead of GitHub-style pull requests that have little to do with Git-the-tool and that everyone gets shunted through, even when a patch (or three) would be more appropriate.
Code size is not what the other commenter was referring to when they wrote "largest". There are of course other projects with more LOC than the Linux kernel.
Stop obsessing about the date of items and judge them on their actual worth and usefulness, not assumed "freshness".
As I have mentioned elsewhere, activity is not a proxy for quality, but many other factors are, including the author, whose name I included in the submission but for some reason has been removed.
I'm with you w.r.t. people on Jan 1 2022 trying to tag things as "(2021)", but can definitely understand the value of tagging as (2020/2019/...and prior).