6300 words of ravings about AIs and rockets.
Permanent location at https://www.datapacrat.com/IO.SYS.html .
I appreciate all the feedback I can get.
Only problem I got with this lil' horror (?) story is the plot point of never looking at the Earth. Strikes me as a little weak. How hard would it be to build a standalone telescope with just enough software installed to answer a single question: Are there people still on Earth or not? Would be nice to know, yes?
> Would be nice to know, yes?
Naturally it would; but if there's some hostile super-intelligence lurking down there, then making that check would mean that if the narrator had previously had a 90% chance of long-term survival, that chance could drop down to 45%. They're the sort to take such numbers seriously, until they figure out enough to come up with better ones.
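One way to get from 90% down to 45% (my illustrative reading of the numbers, not anything stated in the story) is a plain expected-value calculation, assuming a coin-flip prior that a hostile AI is lurking and that being noticed by it is effectively fatal:

```python
# Back-of-envelope model; all three probabilities below are assumptions
# invented for illustration, not canon from the story.

p_hostile = 0.5           # assumed prior: a hostile super-intelligence lurks on Earth
p_survive_baseline = 0.9  # long-term survival odds if the narrator never looks
p_survive_if_seen = 0.0   # assumed odds once a hostile AI notices the observer

# Looking is only dangerous in the worlds where the hostile AI exists.
p_survive_if_look = (p_hostile * p_survive_if_seen
                     + (1 - p_hostile) * p_survive_baseline)

print(p_survive_if_look)  # 0.45
```

So the halving falls out naturally if the narrator assigns even odds to the hostile-AI hypothesis.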
Nah, RB is an entirely different sort of hostile-AI thought experiment. (And one that's easily countered by considering how likely an AI with RB-style values is to arise compared to a "Roko's Rooster", one with the opposite value function.) There are plenty of other hypothetical AIs, too, and I'm hoping that some thoughts get spurred about them.
Maybe you mean something more like the Langford Basilisk? The plot-device idea of the (fictional) Yudkowsky-Schneier protocols is that the (hypothetical) AI on Earth could, with high probability, produce output that sabotages a lesser intelligence (human, brain upload, etc.). Simply causing a sort of emulated-brain shutdown would be bad enough. Now imagine the AI being able to flash its control program into the upload that looked at Earth, causing the upload to take a more active role in leaking information...
That would depend on what purpose you have in making a distinction. If you need to account for the storage space of their brain-state software, then you have to treat them as distinct immediately; if you want to figure out which copies have to go through a legal trial for an act done before the copy was made, then perhaps both will always be fully liable.
That depends on whether you are going for redundancy in hardware. If not... what storage space? You'd just do it as a copy-on-write system. A new instance would consume no initial space, and would grow in size as it diverged from the original. It also means you would be able to quantify *exactly* how distinct an instance is, and watch that number grow over time.
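The copy-on-write scheme described above can be sketched in a few lines, assuming the brain state is stored as fixed-size "pages". A fork shares every page with its parent until it writes to one, and divergence is then just the fraction of pages the fork owns privately. (The class and method names here are illustrative, not from any real emulation framework.)

```python
# Minimal copy-on-write sketch: a fork costs nothing until it diverges.

class CowBrainState:
    def __init__(self, pages):
        self._pages = dict(enumerate(pages))  # page index -> contents
        self._parent = None

    def fork(self):
        child = CowBrainState([])
        child._pages = {}        # no private pages yet: zero initial cost
        child._parent = self
        return child

    def read(self, i):
        if i in self._pages:
            return self._pages[i]
        return self._parent.read(i)  # fall through to the shared original

    def write(self, i, value):
        self._pages[i] = value       # first write copies the page privately

    def divergence(self, total_pages):
        # Fraction of pages this fork no longer shares with its parent:
        # the "exactly how distinct" number mentioned above.
        return len(self._pages) / total_pages


original = CowBrainState(["a", "b", "c", "d"])
copy = original.fork()
print(copy.divergence(4))   # 0.0 -- a fresh fork consumes no space
copy.write(2, "C")
print(copy.divergence(4))   # 0.25 -- one of four pages has diverged
print(original.read(2))     # c  -- the original is untouched
```

The divergence number grows monotonically as the instance accumulates private pages, which is exactly the "watch that number grow over time" property.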
A copy-on-write approach like that assumes that most of a brain undergoes nearly no divergence; for this story, I'm assuming that a brain has enough long-range interconnections that affect each other in a classically-chaotic manner that trying to copy-on-write would lead to a near-immediate full copy.
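The chaos argument above can be demonstrated with a toy model (my own assumption about the dynamics, not the story's math): treat each "page" as one cell of a coupled logistic-map lattice in the chaotic regime, perturb a single page microscopically, and count how many pages have diverged (and would therefore have been privately copied) after a while.

```python
# Toy coupled-map lattice: one microscopic difference in one page soon
# dirties essentially every page, defeating copy-on-write.

N = 16      # number of pages in the toy brain state
R = 3.9     # logistic-map parameter, chaotic regime
EPS = 0.1   # weak coupling between neighbouring pages (the "long-range
            # interconnections", crudely approximated as nearest-neighbour)

def step(state):
    new = []
    for i, x in enumerate(state):
        left = state[(i - 1) % N]
        right = state[(i + 1) % N]
        coupled = (1 - EPS) * x + EPS * 0.5 * (left + right)
        new.append(R * coupled * (1 - coupled))
    return new

base = [0.4] * N
fork = list(base)
fork[0] += 1e-12            # a single microscopic difference in one page

for _ in range(200):
    base, fork = step(base), step(fork)

# Pages that differ by more than a tiny threshold would each need a
# private copy under copy-on-write.
dirty = sum(1 for a, b in zip(base, fork) if abs(a - b) > 1e-6)
print(f"{dirty}/{N} pages diverged")   # typically all, or nearly all, of them
```

The coupling spreads the perturbation to every page while the chaotic map amplifies it exponentially, so the fork converges on a full copy almost immediately, which is the claim above.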
It's quite possible that a more advanced model of how a brain works would be able to abstract away most of the brain's detail as being irrelevant, and thus would be more amenable to saving space via copy-on-write, but I'm only assuming the development of a simple black-box, model-every-cell brute-force emulation here.
Given the demands of such a simulation, one of the first things any such simulation is going to want to do is try to optimise itself. If you're at a point where there are human-based sims around, there are already going to be thousands of animal sims left over from the research to play with - you might have to spin up a few million mice to train your optimiser on, so it can learn to identify which parts of the brain model are important and which can be safely dumped.
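The "learn which parts can be dumped" loop above can be sketched as an ablation search, assuming you have some behavioural probe to compare against. Everything here - the stand-in model, the probe, the tolerance - is invented for illustration; a real optimiser would be learned rather than exhaustive.

```python
# Hedged sketch: ablate one component at a time and keep the ablation only
# if the model's observable behaviour stays within tolerance.

def behaviour(weights, probe):
    # Stand-in for "run the mouse sim and measure what it does".
    return sum(w * p for w, p in zip(weights, probe))

def prune(weights, probe, tol=0.01):
    kept = list(weights)
    baseline = behaviour(kept, probe)
    for i in range(len(kept)):
        saved, kept[i] = kept[i], 0.0            # tentatively dump component i
        if abs(behaviour(kept, probe) - baseline) > tol:
            kept[i] = saved                      # behaviour changed: keep it
    return kept

# Hypothetical component weights: three of them barely matter.
weights = [0.001, 2.5, -0.002, -1.0, 0.0005]
probe = [1.0] * 5
print(prune(weights, probe))   # components with negligible effect are zeroed
```

Training a learned optimiser on millions of mouse sims would amount to amortising this search: predicting which components are droppable without testing each one individually.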
That's an entirely reasonable approach - as long as you can figure out how to avoid accidentally deleting whatever it is that produces consciousness, leaving a p-zombie that sleepwalks through acting normal without actually experiencing qualia.
On Earth, that was very likely done. However, the protagonist of this story only had a single brain-scan to work with: Their own.
If you like, then feel free to assume that the software running the protagonist's mind already included a preliminary set of such optimizations; it wouldn't significantly affect the plot.
Since a p-zombie is indistinguishable by definition... there is certainly a lot of philosophical complexity involved in the matter. Of the sort which one trained in engineering might well opt to simply ignore: if program A and program B behave identically, then a straight substitution is surely acceptable.

Speculating on the practical concerns of optimising emulated brains is difficult when current knowledge of the field is sufficient to produce only a semi-accurate simulation of a nematode neural network - and even that image is not a scan of a single individual, but a composite made by mapping the structure of multiple worms, lacking any kind of chemical description of individual synapses. It's like giving someone in the ancient world a few fireworks to play with, then expecting them to discuss the practical details of multi-stage, liquid-fuel, reignitable rocket engine design.
Anyway, bedtime.