thahxa's minddump

selfmaderibcageman
centrally-unplanned

Okay, I do not think this estimate is going to be correct in the end - no one knows right now, and while mainstream news sources do their homework on average, this is exactly the kind of time when events outstrip thoroughness and sensational claims fill the void. But still, I am shocked that numbers like these could even be suggested - those are far, far higher casualty numbers than almost any recent protest suppression effort. It really speaks to the scale of the protest movement and the depth of commitment the loyalist factions in the Iranian government have to the cause (it is not trivial to get security forces to shoot their own people!).

Hopefully it just turns out to be speculation and the numbers are in that ~2k range instead.

the max credible numbers for 6/4/89 are something in the 3k range. 12k is insane. fuck the irgc
tanadrin

i still don't think LLMs are useful for fiction writing. the problem isn't a fundamental technological limitation--i could easily imagine a world in which they were useful--the problem is, as @nostalgebraist has noted, their taste is just... bad. you ask claude for some sf-themed writing prompts, something you could explore in a few hundred words, and it's not even r/writing_prompts tier garbage (which sometimes at least has an obviously humorous angle, even if it's a dumb one), it's just nothing. i think it would be interesting as a computer science and digital humanities exercise to have a team of authors and artists collaborate with a team of programmers to build an LLM with interesting and varied taste, but given cultural and political polarization over AI, i don't expect anything like that to happen anytime soon.

thahxa

this might be interesting if you haven’t seen it already! there are people working on trying to make llm writing explicitly better

raginrayguns
yorickest

hans moravec giving a narrative of "AI research history" that I hadn't heard before, of roughly constant compute usage for several decades, with hardware gains offset by a coincident decrease in funding across that time:

[image: excerpt from Moravec's text]

I don't know how accurate this is, and I'm not sure how much it engages with the neural network subfield. karpathy says the 1989 LeNet network was trained on a SUN-4/260 workstation with about 25 MFLOPS, which idk how to translate exactly from MFLOPS to moravec's preferred MIPS, but they should be close to 1-to-1 I think

otoh in the next paragraph (which I left out) he talks about things picking up quickly in the 90s, reaching "30 MFLOPs" by 1994, so maybe you can just view LeCun as a few years ahead of the trend
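(a rough sketch of that arithmetic, treating MFLOPS and MIPS as interchangeable per the 1-to-1 hand-wave above; the numbers are just the ones quoted here, nothing more precise:)

```python
# back-of-envelope: is LeCun's 1989 setup "a few years ahead" of moravec's trend?
# assumes MFLOPS and MIPS are roughly 1-to-1, which is only approximate,
# and uses only the figures quoted in the post.

lenet_year, lenet_mflops = 1989, 25   # karpathy's figure for the SUN-4/260
trend_year, trend_mips = 1994, 30     # moravec's "by 1994" figure

lead = trend_year - lenet_year
print(f"LeCun in {lenet_year}: ~{lenet_mflops} MFLOPS (~{lenet_mflops} MIPS-equivalent)")
print(f"moravec's trend: ~{trend_mips} MIPS by {trend_year}")
print(f"so LeCun looks roughly {lead} years ahead of the trend, give or take")
```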

the other narratives of AI research I've heard are the cyclical AI winters (the downturns of which this describes) and the back-and-forth of neural net vs symbolic shit (which idk how that maps onto this), so this is an interesting one. and moravec was around for much of that, getting his masters in 1971 and PhD in 1980 (that's a refreshingly late one). so maybe he's not totally off-base.

this is a fun line a little later too:

In 1996 a theorem-proving program called EQP running five weeks on a 50 MIPS computer at Argonne National Laboratory found a proof of a boolean algebra conjecture by Herbert Robbins that had eluded mathematicians for sixty years. And it is still only spring. Wait until summer.

summer has come, Hans... I wonder what you think of it