The Modern-Day Boogeyman

I was recently watching a docuseries that talked about the Oracle of Delphi and all the subsequent oracles, who were apparently successful until an earthquake brought down the temple where they performed their prophecies.

According to the docuseries, scholars concluded that it was probably some special gas emanating from underground that put the oracles into some kind of trance, a gas that was no longer available to them after the temple was brought down.

They didn’t say as much, but one can only assume this was their explanation for why nothing paranormal had actually been happening.

Even though this sort of dialogue is extremely typical in academia, two things suddenly occurred to me while watching this. One, if we’re so sure that there can be no paranormal explanation for something/anything, why do we go to so much trouble desperately searching for a mechanistic explanation for anything? Why wouldn’t we just assume there’s some such explanation, whether we’re aware of it or not, and leave it at that?

And two, so what if the Oracles of Delphi really were psychic prophets? Why does that consideration, and others like it, bother us so much that we must always “solve the case” and come up with any explanation whatsoever that we can think of other than the paranormal?

These two things point to one thing: we’re playing a game with ourselves. To a mind that sees beyond the spirit of the times of its culture, there are many such games we collectively play with ourselves without even realizing it.

This particular game is rooted in our collective denial, following the Enlightenment, of a whole side of reality—indeed, the side that contains all the most beautiful, fascinating, meaningful, and uplifting aspects of life. That denial is ingrained in us from such an early age and from so many directions that we don’t even realize we have this scientistic/physicalist/mechanistic bent or bias.

We’re constantly trying to convince ourselves in order to keep up the charade, and we’ve hidden from the light for so long that the light of the spiritual has become dangerous and painful to us, and is therefore treated as the modern-day boogeyman.

Reason vs Magic?

I just had a debate with a couple of Twitter users, one of whom was aggressively wrong while insulting me before deciding the conversation was over, and the other of whom was very strange, but before the first one left, they did make one point that gave me pause. Here’s the starting point of the conversation: taoki on X: “125-140IQ is probably the worst IQ you could have. literally anything else is better” / X. (I didn’t realize until it was all over that I had been talking to two different people.)

The former user’s point was, “& yet you propose the world is not in adherence to reason, which is the entire point why reasoning is an invaluable tool? This conversation is over”, after I’d mentioned that I believe the universe is non-mechanistic and non-deterministic, that consciousness is the primary substrate, that consciousness is magic in its fullest sense, and that logic is a filter that helps us weed out considerations that can’t be the case because they’re self-contradictory, but that logic can’t in itself create anything new. So, let me attempt to tackle that here.

First, magic and the non-mechanistic are not necessarily illogical. I, for one, am fiercely logical, and I can hold the possibilities of both those things in my mind easily. They’re rather counterintuitive, and they violate a typical set of first principles that aren’t strictly, fundamentally required by logic but whose abandonment may be confusing and scary for some people. Magic and the non-mechanistic simply aren’t as neatly structural or conceptually orderly, nor are they as precisely modelable (e.g. with mathematics/physics), as many would prefer. They’re not grounded in the concrete, the absolute, the discretely apprehensible.

Insofar as magic can be understood in semantic/semiotic terms (which it can, to a limited but not total degree), logic can be applied to it, because logic applies to semantic/semiotic thought. Logic is indeed the most fundamental organizing principle behind such thought.

But even if logic couldn’t be applied to magic or the non-mechanistic, that *still* wouldn’t mean that believing in magic or the non-mechanistic is illogical, or that you couldn’t still use logic to filter reality while believing in magic or the non-mechanistic. You could simply compartmentalize magical or non-mechanistic things as unknowns, or as impenetrable to logic’s purview, while still applying logic faithfully to whatever you deem amenable to it. The magical or non-mechanistic things could then be considered, shuffled, vetted or thrown out by logic only atomically, just not with respect to their internal structures (to whatever degree they even have internal structures per se).
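To sketch what I mean by “atomically” a bit more formally (this is just an illustrative analogy on my part, borrowing the standard propositional-calculus picture, not anything from the exchange above): in propositional logic, the atoms are already black boxes by construction, and consistency depends only on how they’re combined, never on what’s “inside” them.

% Treat a "magical" claim as an opaque propositional atom M.
% Logic never inspects M's internals, yet it can still vet M atomically:
\[ M \land \lnot M \vdash \bot \]   % any self-contradictory combination gets thrown out
\[ \vdash M \lor \lnot M \]         % while M on its own remains admissible as an unknown
% The consistency of the whole system depends only on how atoms like M are
% combined, not on their (possibly ineffable) internal content.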

This may imply that if logic couldn’t be applied to magic and the non-mechanistic, the internals of magical or non-mechanistic things would be merely *alogical* rather than *illogical*, because if such things/the concepts of such things contained *illogical* parts or relationships between parts, then it would arguably stand to reason that an overarching logical organizing principle would *have* to throw them out in order to maintain integrity. I think this remains arguable, though, because one could consider such exceptional items as “black boxes” whose illogical internals don’t and can’t affect anything else with regard to overall logical consistency, since the insides and outsides aren’t related by any given logical principles; the complete insides are, again, atomically contained within logical black boxes.

The previous two paragraphs may be completely academic, as I already stated that logic can be applied to one’s semantic understanding of the magical and non-mechanistic–i.e., to the internals of the things/the concepts of the things–but then, I also implied that perhaps magic or the non-mechanistic can’t be completely understood, or can’t be completely understood semantically. For example, I believe that the most fundamental layers of life or existence are wholly ineffable. This makes their internals alogical, except that they don’t actually have internals, because there are no internals to know. However, their basic alogicality may impinge upon things outside of them, because presumably they have some sort of observable influence on or relationship to the rest of reality, or there wouldn’t be any reason to even consider them extant. So, where do the logically cascading effects of such alogical elements end? Or, coming from the other direction, where do the cascading effects of the *logical* elements of reality/one’s ontology end? I.e., if the ineffable can’t be separated from the effable, and logic can be applied to the effable, where and how do the logical implications stop short of the ineffable?

There are two possible answers. One, consider the alogical ineffable to be, again, inside a logical black box that’s not connected to the outside by any logical principles. But if it’s in such a black box, can it still affect or have correspondence to the rest of reality? And if not, then are our earlier-established black boxes even valid either? I think this conundrum can be resolved by answer number two.

Two, simply apply logic exactly as far as you apply semantic/semiotic explanation or modeling, and don’t apply it to what you know exists but can’t semantically/semiotically explain or model. This, of course, retains the right to logically consider magical or non-mechanistic things, or the entire ineffable substrate of reality, on an atomic level in order to decide whether to throw them out or not. I decide not to throw them out.

To make an argument that the most fundamental levels of reality are ineffable: to understand a layer of reality, you must understand it as an outcrop of a more fundamental layer; otherwise you haven’t understood why the former layer is the way it is as opposed to any other conceivable way. And the most fundamental layer can’t possibly be understood as an outcrop of an even more fundamental layer, because there obviously is no layer more fundamental than the most fundamental one.

Similarly, though, regarding not our understanding but metaphysical reality itself: if there’s no more-fundamental layer, then there’s literally nothing to determine that the most fundamental layer is what it is as opposed to any or every other possible way, and therefore there can’t be a most fundamental layer. Therefore, reality must be an endlessly/infinitely layered onion, and obviously we can never understand an unlimited number of layers of reality.

There is one alternative to the infinite-layers model, though, and that’s that everything that could ever possibly or conceivably exist, exists “somewhere,” “sometime” or “always-already,” which includes all the possibilities in which nothing locally or globally exists; we simply never notice those possibilities because we’re too busy experiencing the things that *do* exist. But then, that brings us right back to infinite layers, because obviously there’s no end to how many more-fundamental substrates of reality are metaphysically conceivable or possible and that therefore must exist.

Well, this has gotten a little more masturbatory than I’d anticipated, which maybe proves the Twitter user right that I need to “touch grass,” but nonetheless, the most important and simple points about magic and the non-mechanistic not necessarily contradicting the will to be logical, rational or reasonable are all up there somewhere.

AI “Art” Is Not Art.

People writing prompts for AI to create images and then considering themselves artists always makes me think of a toddler telling his robot Baymax, “lift this 100lb barbell,” or “get me the Cheerios from on top of the fridge,” and then happily squealing, “I did it!” I think it’s obvious that such people want the feeling of pride or accomplishment without actually having to do anything or to have any skill or talent.

If the so-called artist removed the AI from the equation and just published their prompts, that would be an accurate representation of how much “art” they actually did.

Some people argue that AI is merely a tool in creating art, in the same way a paint brush is a tool that a painter uses, but this is mere sophistry. AI goes beyond being such a tool. It’s essentially no different from someone telling their big brother, “write me an essay on euthanasia,” and then considering the work their own. It makes no difference whether the “tool” in question is another person or an AI with respect to whether the person barking out commands is really an “artist.” It amazes me that people can’t see that.

Of course, one could argue that even if the supposed art isn’t the art of the person writing the prompt, it’s still art, only it’s that of the AI. But this is wrong, too. AI “art” is necessarily nothing other than a rehash of all the millions of artworks that have come before it.

Based on a lifetime of being a native English speaker and hearing common usage of the word “art,” I understand the word “art” to mean something like the following: more or less freeform expression by a living being, which may or may not involve some medium other than their own body, that intentionally conveys some emotional and/or (possibly abstract) cognitive meaning to an audience in a creative (and hopefully skilled or talented) way. And AI is not a living being, nor does it have any intentions, intelligence or thoughts (see https://myriachromat.wordpress.com/2019/09/13/on-the-possibility-of-artificial-general-intelligence/).

I also propose that “art” produced by AI comprises a subtle cognitohazard, in that one is being influenced and led astray by an enormously complicated machine masquerading as natural intelligence—as life.

Similarly to AI “artists” not being real artists, “vibe coding” is not real coding, though I wouldn’t necessarily argue that it’s something people shouldn’t do; it seems useful (when it works), and computer code is something intellectually dry enough that I wouldn’t consider its source very relevant to whether it, or its results, comprises a cognitohazard or not (though it’s very possible that coding or studying code isn’t something that’s ultimately healthy for anybody to be doing).

Though I would definitely suggest that something people shouldn’t be doing is using AI to help them write books, articles, etc. It outsources humanity, or the human touch, to something nonliving, and that’s actually scary.

Even more scary, though, is when people form emotional bonds with AI, especially making an AI character their boyfriend or girlfriend. This introduces the same cognitohazard explained above but on another level. (And while we’re on the subject, there is also the problem of LLMs obsequiously creating echo chambers with people that lead them down a path to insanity.) Even worse than that will be when sex bots become a thing and people start having intimate encounters with them. For the love of all that is holy, just don’t!

The Quandary We’re *Really* Pondering

When people ponder the “mind-body problem,” the “hard problem of consciousness,” and many other so-called problems, what they’re really doing is pondering the maze of analytical, linguistic, materialist concepts and modes of thought that they’ve become trapped in, looking for a way out. The way in which they’ve framed the actual problem is incorrect; they’re barking up the wrong tree.

With a correct perspective on life, or a correct cognitive approach, these issues and other metaphysical issues (such as chasing the nature of the world through physics) would dissolve, and we wouldn’t make such categorical separations as “mind versus body,” “consciousness/life versus non-life,” and “self versus other.” Life—what’s actually important in life, and the meaning of so-called consciousness, etc.—would become clear just by living it and looking at it.

There are indigenous cultures who have better vision in this respect, even if some of the ideological details of their worldview are factually incorrect. These relative details are not as important as we think, and besides, there’s a lot more truth in mythology than we think with our analytically, rationalistically trapped minds. The non-human animals are also better connected to life.

Related links:
Sophia Cycles video essays explaining various myths, in reference to the “lot more truth in mythology than we think”
Dr Iain McGilchrist: We are living in a deluded world – YouTube, explaining how the fundamental ways we think about and perceive reality are severely imbalanced/pathological and the social consequences of this
myriachromat/Inhahe – On the Word Consciousness, explaining a bit about how we wrongly separate or “bracket off” what we call consciousness from everything else
myriachromat/Inhahe – Language is the Problem, explaining how the fundamental ways we think about and perceive reality are severely imbalanced/pathological
Laurel Thompson – Language Is the Problem, explaining how the fundamental ways we think about and perceive reality are severely imbalanced/pathological, and also certain remedies to this imbalance
Facebook – Darin “Stevenson” talks a lot about the systemic problems in our cognitium in a way that nobody else likely ever did or will, but you may have to do some sifting through posts to find relevant ones