The confabulation to justify picking out related images that the left brain never observed (the chicken and snow shovel in the article) reminds me profoundly of the confident slop produced by LLMs. Makes you wonder if LLMs might be one half of the "brain" of a true AGI.
More than half! Human experience is more confabulation than not. The optic nerve's digital-equivalent bandwidth is an estimated 10 Mbps, predominantly dedicated to a narrow radius around the center of vision. Everything around the outside of your vision is a few fuzzy pixels that are in-filled with plausible data. The same goes for the blind spot created by the optic disc, which is fully fabricated, as it has no cones or rods at all!
That's a theme in the novel "Neuromancer".
It certainly looks like what LLMs are doing is one aspect of what a brain is doing.
The key point here is "looks like". If you want to argue this further, I suggest investing the time to ask brain scientists what they think - not AI scientists, but people who actually work in cognition.
(Not a brain scientist btw)
Only if you think computers need to copy the precise neurological "hardware" systems of the brain. However, if you think of consciousness as software, then I think overlap is highly likely, considering both are emergent. That said, this topic is more philosophical than scientific at the end of the day.
along those lines, maybe dreaming is piecing together new adventures imagined from snippets of reality.
Those videos of an AI making up a game after having watched countless hours of streaming are fucked up. They look exactly the way dreams do.
Do you have a link?
> confident slop.
The confidence seems to be an artifact of fine-tuning. The first instruction-tuned models were given data sets with answers to questions, but those sets generally omitted non-answers for things the model didn't know.
Later research showed that models know that they don't know certain pieces of information, but the fine-tuning constraint of providing answers did not give them the ability to express that they didn't know.
Asking the model questions against known information can produce a correct/incorrect map detailing a sample of facts that the model knows and does not know. Fine-tuning the model to say "I don't know" in response to those questions where it was incorrect can allow it to generalise to its internal concept of the unknown.
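To make that concrete, here is a minimal sketch of the probing loop described above. `query_model` and the (question, answer) pairs are hypothetical stand-ins, and the substring check is a crude proxy for real answer grading:

    def build_idk_finetuning_set(qa_pairs, query_model):
        """Probe the model against known facts, then label the misses."""
        examples = []
        for question, known_answer in qa_pairs:
            prediction = query_model(question)
            if known_answer.lower() in prediction.lower():
                # The model already knows this fact: keep it as a positive example.
                examples.append({"prompt": question, "completion": known_answer})
            else:
                # The model was wrong here: train it to abstain on this question.
                examples.append({"prompt": question, "completion": "I don't know."})
        return examples

Fine-tuning on a set like this is what seems to let the abstention behaviour generalise beyond the probed facts.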
It is good to keep in mind that the models we have been playing with are just the first ones to appear. GPT-3.5 is like the Atari 2600: you can get it to provide a limited experience of what you want, and it's cool that you can do it at all, but it is fundamentally limited and far from an ideal solution.
I see the current proliferation of models as being like the Cambrian explosion of early 8-bit home computers: exciting and interesting technology that can be used for real-world purposes, but you still have to operate with the limitations at the forefront of your mind and tailor tasks so they can perform the bits they are good at.
I have no idea of the timeframe, but there is plenty more to come. A lot of advances have been revealed in papers, and a huge number of them have not yet coalesced into shipping models. When models cost millions to train, you want to be using a set of enhancements that play nicely together, and some features will be mutually exclusive. By the time you have analysed the options to find an optimal combination, a whole lot of new papers will be suggesting more options.
We have not yet got the thing for AI that Unix was for computers. We are just now exposing people to the problems that drive the need to create such a thing.
I believe most confident statements people make are established the same way. There are some anchor points (inputs and vivid memories), and some part of the brain, in some stochastic way, dreams up connections. Then we convince ourselves that the connections are correct, just because they match some earlier-seen pattern or way of reasoning.
The basis of human irrationality is not tied to the basis of LLM irrationality.
LLMs don't get to make value judgements, because they don't "understand". They predict the subsequent points of a pattern given a starting point.
Humans do that, but they also jade their perception with emotive ethics, desires, optimism and pessimism.
It is impossible to say that two humans with the exact same experience would always come to the same conclusion, because two humans will never have the exact same experience. Inputs include emotional state triggered by hormones, physical or mental stress, and so forth, which are often not immediately relevant to any particular decision, but carried over from prior states and biological processes.
Just because humans have additional sources of irrationality doesn't mean they don't also have irrationality based on the same lack of self-awareness that LLMs exhibit.
I could understand that argument as follows: LLMs fill in the gaps in a creative but predictable way. Humans fill in the gaps in creative but unpredictable ways. The creativeness level is affected by the ad hoc state of the brain.
I understand that you relate judgement, ethics and emotions to 'understanding'. I'm not convinced. Emotions might as well be an effect of pattern matching. You hear a (pattern matched) type of song, you feel a certain way.
Conversely, human beings with varying particular experiences can come to the same conclusions, because human cognition can abstract from particulars, while LLMs are, at best, statistical and imagist. No two of us ever experience the same set of triangles, but abstraction allows us to form concepts like "triangularity", which means we can understand what it means to be a triangle per se (something that is not concrete or particular, and therefore cannot be visualized), while an LLM can only proceed based on the concrete and particular data of input triangles and derivations introduced into the model. It can never go "beyond" the surface features of the training model's images, as it were, and where the appearance of having done so occurs, it is not via abstraction but by way of the products of human abstraction. From the LLM's perspective, there is no triangle, only co-occurrence of features, while abstraction goes beyond features, stripping them away to obtain the bare, unimaginable form.
Different LLMs can also come to the same conclusions, that is, predict strings of tokens with the same meaning.
Sure, they're a long way from the capacity of humans in terms of short-term training. But there is literally nothing that indicates they can't "think" (understand, reason, abstract, whatever word you wanna put in italics), because nobody can explain what it really means, because: it's all just predicting. Text happens to be super useful for us to evaluate certain aspects of predicting.
All of this is way above my pay grade; however, there exists this work by Julian Jaynes called The Origin of Consciousness in the Breakdown of the Bicameral Mind: https://ia802907.us.archive.org/32/items/The_Origin_Of_Consc...
Seems pertinent, and now I will try to read it again. Perhaps it will be useful for reference by others.
It's a very interesting theory; either he's a genius or the theory is completely insane. I cannot decide which.
I always thought it was interesting that the human brain grew relatively quickly in evolutionary history. 3 million years ago, our ancestors had a 400 cc brain. 2.5 million years later, it was 1,400 ccs--more than 3 times larger.
That implies to me that a larger brain immediately benefited our ancestors. That is, going from 400 to 410 ccs had evolutionary advantage and so did 410 to 420, etc.
That implies that once the brain architecture was set, you could increase intelligence through scale.
I bet there are some parallels to current AI there.
This comment reminded me of "A Thousand Brains: A New Theory of Intelligence" by Jeff Hawkins, which explores this. To Hawkins, the brain's relatively fast evolution implies there's a general-purpose "compute" unit that, once evolved, could proliferate without novel evolutionary design. He claims this unit is the brain's cortical column, and provides a lot of interesting evidence and claims that I no longer remember :)
The fact that the explaining part of the brain fills in any blanks in a creative manner (you need the shovel to clean the chicken shed) reminds me of some replies from LLMs.
I once gave an LLM the riddle of the goat, the cabbage and the wolf, and changed the rules a bit: I prompted that the wolf was allergic to goats (and hence would not eat them). Still, the LLM insisted on not leaving them together on the same river bank, because the wolf would otherwise sneeze and scare the goat away.
My conclusion was that the LLM solved the riddle using prior knowledge plus creativity, instead of clever reasoning.
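For contrast, the modified riddle is trivial for a plain state-space search, which has no memorized solution to fall back on. A small sketch under the modified rules (the wolf is allergic to goats, so only the goat-eats-cabbage constraint remains; the state encoding is my own):

    from collections import deque

    ITEMS = ("wolf", "goat", "cabbage")

    def unsafe(bank):
        # The wolf no longer eats the goat, so only goat+cabbage is forbidden.
        return {"goat", "cabbage"} <= bank

    def solve():
        # State: (items on the starting bank, farmer's side: 0 = start, 1 = far).
        start = (frozenset(ITEMS), 0)
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            (left, farmer), path = queue.popleft()
            if not left and farmer == 1:
                return path
            here = left if farmer == 0 else frozenset(ITEMS) - left
            # The farmer crosses alone or with one item from his current bank.
            for cargo in [frozenset()] + [frozenset([x]) for x in here]:
                new_left = left - cargo if farmer == 0 else left | cargo
                unattended = new_left if farmer == 0 else frozenset(ITEMS) - new_left
                if unsafe(unattended):
                    continue
                state = (new_left, 1 - farmer)
                if state not in seen:
                    seen.add(state)
                    queue.append((state, path + [next(iter(cargo), "alone")]))

    print(solve())  # a 5-crossing plan, e.g. goat, alone, wolf, alone, cabbage

Dropping the wolf constraint shortens the plan from the classic seven crossings to five, which is exactly the re-derivation the LLM failed to perform.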
I believe LLMs are entirely analogous to the speech areas of the brain. They have a certain capacity for speaking automatically, reflexively, without involving (other) memory, for example. That is how you are able to deliver quippy answers; that is where idioms "live". You can see this in people with certain kinds of brain damage: if they are unable to recall certain memories (or sometimes if you press somebody to recall memories that they don't have), they will construct elaborate stories on the spot. They won't even notice that they are making it up. This is called confabulation, and I think it is a much better term than hallucination for what LLMs do when they make up facts.
I feel this analogy is confirmed by the fact that chain of thought works so well. That is what (most?) people do when they actively "think" about a problem: they have a kind of inner monologue.
Now, we have already reached the point where LLMs are much smarter than the language areas of humans - but not always smarter than the whole human. I think the next step towards AGI would be to add other "brain areas": a limbic system that remembers the current emotion and feeds it as an input into the other parts. We already have dedicated vision and audio AIs. Maybe we also need a system for logical reasoning.
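Purely as a toy illustration of that proposal (every name here is invented, and the appraisal rule is a placeholder rather than a claim about how a real system would work):

    class LimbicSystem:
        def __init__(self):
            self.emotion = "neutral"  # persistent state, unlike a stateless LLM call

        def update(self, event: str) -> None:
            # Placeholder appraisal rule; a real system would learn this mapping.
            if "threat" in event:
                self.emotion = "anxious"
            elif "reward" in event:
                self.emotion = "pleased"

    def respond(limbic: LimbicSystem, user_input: str, language_model) -> str:
        limbic.update(user_input)
        # The remembered emotion is injected as extra input to the language part.
        prompt = f"[current emotion: {limbic.emotion}]\n{user_input}"
        return language_model(prompt)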
>Miller’s study uses a test called the “trait-judgment task”: A trait like happy or sad flashes on a screen, and research subjects indicate whether the trait describes them. Miller has slightly modified this task for his split-brain patients—in his experiments, he flashes the trait on a screen straight in front of the subject’s gaze, so that both the left and right hemispheres process the information. Then, he quickly flashes the words “me” and “not me” to one side of the subject’s gaze—so that they’re processed only by one hemisphere—and the subject is instructed to point at the trait on the screen when Miller flashes the appropriate descriptor.
Seems to me (not a neuroscientist) like there's a flaw in that experiment: how would the right hemisphere understand the meaning of the words if language is processed only by the left? I also recall reading that the more "primitive" parts of our brains don't have a concept of negation.
But maybe they have considered this and it's not an issue?
Related: You Are Two by CGP Grey: <https://www.youtube.com/watch?v=wfYbgdo8e-8>
If one is interested in hemisphere theory, including psychological and philosophical implications, make sure to check out the work of Ian McGilchrist:
https://www.youtube.com/watch?v=3V3_Y_FuMYk
i haven't had any split brain operation done or anything but my personal experience as someone with dissociative identity disorder is that i can usually tell which parts are more left or right brained based on how they react to everyday events
for our particular brain, the logical ones usually have immediate reactions that suck (like borderline personality disorder) and the emotional ones tend to have mish mash wordses feelings
(idk how to write that it's like words-es, the s at the end is very gender for emotional feelings words so all of them tend to have it)
I disagree with Steven Pinker’s claim that consciousness arises from the brain.
This perspective fails to establish that the brain produces consciousness, as it relies on the mistaken assumption that "mind" and "consciousness" are interchangeable. While brain activity may influence the mind, consciousness itself could be a more fundamental aspect of reality. Rather than generating consciousness, the brain might function like a radio, merely receiving and processing information from an all-pervasive field of consciousness.
In this view, a split-brain condition would not create two separate consciousnesses but instead allow access to two distinct streams of an already-existing, universal consciousness.
If consciousness doesn't arise from the brain, it seems to be suspiciously well correlated with the brain.
I think consciousness arises from the brain.
"If the music I dance to doesn't arise from the radio, it seems to be suspiciously well correlated with the radio.
I think the music I dance to arises from the radio."
Postulate 1: The music is created by the radio in the form of sound waves, the end.
Postulate 2: The music was played by a band in the form of sound waves, some time in the past. The band recorded their music on to some storage medium so that it could be transmitted to the future. In the present, the storage medium is connected up to a piece of equipment that turns the recorded signal into some invisible power transmission that spreads throughout space in a way you can't experience directly with any of your natural senses. The radio however can sense these invisible power transmissions and can turn them back into audio that sounds like what the band played in the past. So we're saying that it is possible to create music in the form of sound waves (that's what the band did), and it is possible for the radio to output sound waves that sound like music (that's what the radio does), but the radio is curiously not the thing that is producing music and instead we have an enormous system of technology transmitting the music across space and time.
You'd need an awful lot of evidence to convince me that postulate 2 is true and postulate 1 is false.
On the one hand you have "consciousness can be created, and it is created by the brain". On the other hand you have "consciousness can be created, and it is created somewhere, but it's not created by the brain, instead it is created somewhere else and there is a system of consciousness transmission that gets it into the brain".
There's just no reason to prefer the second explanation. It is a more complicated story.
Note that in this scenario, we’ve never even heard of radio stations or radio waves before.
And despite looking for them intensely, we have never found any evidence of the existence of radio waves, or been able to send a signal to a radio ourselves.
Well, it must all come from a singularity some time before the Big Bang.
Yet, when I turn the radio on, music really does seem to come out of it.
And when I turn the radio off, the music stops (for me, but not for you).
Without the radio there is no sound, but the radio needs a signal.
Does the radio make the music? Quite an interesting metaphor.
Yes! I like it even more when you consider the brainwaves that deal with... frequency... hmm...
> I think consciousness arises from the brain.
I tend to agree, but it doesn't fully explain Benj Hellie's vertiginous question [1]. Everyone seems to have brains, but for some reason only I am me.
If we were able to make an atom-by-atom accurate replica of your brain (and optionally your body, too), with all the memories intact, would you suddenly start seeing the world from two different pair of eyes at the same time? If no, why? What would make you (the original) different from your replica?
[1] https://en.wikipedia.org/wiki/Vertiginous_question
I feel like this is just a totally stupid question.
The brain has inputs, internal processing, and outputs. The conscious experience happens within the internal processing.
If you make a second copy, then that second copy will also have conscious experience, but it won't share any inputs or outputs or internal state with the first copy.
If you were to duplicate your computer, would the second computer share a filesystem with the first one? No. It would have a copy of a snapshot-in-time of the first computer's filesystem, but henceforth they are different computers, each with their own internal state.
You could argue that there are ways to do it which make it unclear which is the "original" computer and which is the "copy". That's fine, that doesn't matter. They both have the same history up to the branching point, and then they diverge. I don't see the problem.
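To put the computer analogy in executable terms, a toy sketch (a dict standing in for "internal state" is of course a gross simplification):

    import copy

    original = {"memories": ["walked into the machine"]}
    replica = copy.deepcopy(original)  # identical snapshot at the branch point

    original["memories"].append("walked back out")
    replica["memories"].append("woke up in the second chamber")

    assert original["memories"][0] == replica["memories"][0]  # shared past
    assert original["memories"][1] != replica["memories"][1]  # divergent futures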
When you replace "I" with "it" (as in your example with computers) the question becomes meaningless and stupid. As an outside observer both computers are the same, as they act exactly the same way, therefore there is no question. That is actually the "egalitarian" view in Benj Hellie's paper [1]:
> The ‘god’s eye’ point of view taken in setting up the egalitarian metaphysics does not correspond to my ‘embedded’ point of view ‘from here’, staring out at a certain computer screen.
The vertiginous question (or Chalmers' Hard Problem [2] to a degree: why does physical brain activity produce a first-person perspective at all?) is about the subjectivity of consciousness. I see the world through my eyes, therefore there is only one "I" while there are infinitely many others.
The duplication example was something I made up to explain the concept, but to reiterate, if I could make a perfect copy of me, why would I still see the world from the first copy's eyes and not the second, if the physical structure of the brain defines "me"? What stops my consciousness from migrating from the first body to the second, or both bodies from having the same consciousness? Again, this question is meaningless when we are talking about others. It is a "why am I me" question and cannot be rephrased as "why person X is not person Y".
Obviously we don't have the capacity to replicate ourselves, but I, as a conscious being, instinctively know (or think) that I am always unique, regardless of how many exact copies I make.
As I mentioned in another comment, I don't have a formal education in philosophy, so I am probably doing a terrible job trying to explain it. This question really makes sense when it clicks, so I suggest reading a more qualified person's explanation.
[1] http://individual.utoronto.ca/benj/ae.pdf
[2] https://consc.net/papers/facing.pdf
> if I could make a perfect copy of me, why would I still see the world from the first copy's eyes and not the second, if the physical structure of the brain defines "me"? What stops my consciousness from migrating from the first body to the second, or both bodies from having the same consciousness?
If you define consciousness as the stream of perceived experiences coming from the physical body (sights, sounds, touch, and even thoughts, including even the thought that you're in control), it's expected each body would have its own consciousness? The OP article about split-brain experiments also (very counterintuitively) indicates that at least some thoughts are perceived rather than something you're actively doing?
Right, yes: Why does physical brain activity produce a first-person perspective?
We might ask "what else do we expect it to do?" A second person perspective makes even less sense. And since the brain's activity entails first-person-perspective-like processing, the next most obvious answer, no perspective at all, isn't plausible either. It's reasonable that the brain would produce a first person perspective as it thinks about its situation. (And you don't have extend this to objects that don't think, by the way, if you were thinking of doing that.)
But I'm still left with the impression that there's an unanswered question which this one was only standing in for. The question is probably "what is thinking, anyway?".
Or, something quite different: "Why don't I have the outside observer point of view?". It's somehow difficult to accept that when there are many points of view scattered across space (and time), you have a specific one, and don't have all of them: "why am I not omniscient?". It's egotistical to expect not to have a specific viewpoint, and yet it seems arbitrary (and thus inexplicable) that you do have one. But again, the real question is not "why is this so?" but "why does this seem like a problem?".
Personally, my answer to “why am I me” is similar to the anthropic principle. If you were anyone else, you would be asking the exact same question, and if you were nobody, you would not be able to ask the question. By asking the question, you must necessarily be somebody, and the question would be the same no matter which somebody.
Your answer works when you are observing the person(s) from outside, referring to a third person (A and B are both conscious, so it doesn't matter which one is which). However, it doesn't answer the question when one of the subjects is "I", because I and everyone else are clearly different (hence the title of Benj Hellie's paper I linked above: Against Egalitarianism):
> The ‘god’s eye’ point of view taken in setting up the egalitarian metaphysics does not correspond to my ‘embedded’ point of view ‘from here’, staring out at a certain computer screen. The god’s eye mode of presentation of the Hellie-subject and the embedded mode of presentation of myself are different: as different as the manifest and scientific modes of presentation of water—indeed, perhaps even more so: that is the core of the Humean worry. So it is not a priori that any of those subjects is exactly the same thing as me. And if not, if I am told that it is this one that is me, I want to know why that is.
> Why does physical brain activity produce a first-person perspective at all?
I agree that this question is mysterious and fascinating, I just don't think the question of forking your consciousness bears on it at all.
The fact that first-person perspective exists is probably the fact that I am most grateful for out of all the facts that have ever been facts.
But I don't have any difficulty imagining forking myself into 2 copies that have a shared past and different futures.
I don’t understand how this refutes physicalism. Only my eyes are hooked up to my brain. If you duplicate the whole system there would be a duplicate that would begin experiencing its own version of reality.
> I don’t understand how this refutes physicalism.
Maybe it doesn't, and there is a plausible explanation; that's why it has remained an unanswered question. But it's definitely an astonishing question.
You instinctively say that even if you duplicate the whole system, "you" would remain "you" (or "I", from your point of view), and the replica would be someone else. In this context you claim that there is a new consciousness now, but there was supposed to be only one, because our initial assumption was consciousness == brain.
You are right if you define consciousness as being able to think, but when you define it as what makes you "you", then it becomes harder to explain who the replica is. It has everything (all the neurons) that makes you "you", but it is still not "you".
The above may not make sense, as it is difficult for a layman such as me to explain the vertiginous question to someone else. I suggest you read the relevant literature.
Say I walk into a machine, and then I walk out, and also an exact duplicate walks out of a nearby chamber. My assumption is that we’d both feel like “me”. One of us would have the experience of walking into the machine and walking out again, and the other would have the experience of walking into the machine and being teleported into the other chamber.
I'm probably lacking in imagination, or the relevant background, but I'm having trouble thinking of an alternative.
> My assumption is that we’d both feel like “me”.
You assume that both would feel like you, but there is no way you can prove it. The other could be a philosophical zombie [1] for all you know.
Would the "current you" feel any different after the duplication? Most people, including me, would find this counterintuitive. What happens if the other you travels to the other end of the world? What would you see? The question is not how the replica would think and act from an outside observer's perspective, but would it have the same consciousness as you. Would you call the replica "I"?
Or to make it more complex, what would happen if you save your current state to a hard disk, and an exact duplicate gets manufactured 100 years after you die, using the stored information?
[1] https://en.wikipedia.org/wiki/Philosophical_zombie
Like GP, I feel that I might be lacking in imagination here, but I really don't follow what this is supposed to reveal.
> Would you call the replica "I"?
The two would start out identical and immediately start to diverge like twins. They would share memories and personality but not experience? What am I missing here?
I too don't get what's being missed.
I understand what the author means, though I struggle to express it as well. The best I can come up with is this: what defines "I"? Is the self separate from the brain, and if so, how? Or does "I" merely appear that way because our perspective is informed by our limited being?
It seems to me that this ascribes an existence to “I” that is separate from the brain; with no evidence for this existence, that makes it mystical/magical thinking, a.k.a. superstition.
Not really. The "vertiginous question" is just that, a question. We can't call a question superstition because we don't have a good answer for it yet.
For example, we can't call the question "why does gravity exist" superstition either. It's a valid question. We can feel the gravity, measure it, and forecast it, therefore it exists, but we still don't have a concrete answer as to what causes it. We don't assume that there is a metaphysical explanation, but we don't know the actual answer either. Similarly, the vertiginous question is a meaningful question, even though we don't have an answer.
> Would you call the replica "I"?
Both of the replicas would refer to themselves as "I", but neither would refer to the other as "I".
Oh yes, if the question is whether the duplicate is also _me_, then I understand the concern. That's a much more complicated question. But when it comes to perspective, it's easy to answer. Which I guess is literally what the wiki page says; it makes more sense as you state it, though.
Thanks for the additional explanation. I have read a good deal from Nagel to Chalmers and somehow missed this particular question.
> I have read a good deal from Nagel to Chalmers and somehow missed this particular question.
Chalmers' "Hard Problem" is very similar, although not exactly the same. My understanding is that it asks "why is there something called consciousness at all", as in, a robot doesn't have the notion of "I", but for some reason we do. The question is hard because it is hard to explain it only by our brains being more complex than a robot's CPU. Hellie's question is "why am I me and not someone else".
Yes, the two of you would see through two pairs of eyes, independently.
Both of you would be you, and you two would function separately, occupy separate spaces, and diverge slightly in ways that would only rarely make a difference to your personality.
But that's not the vertiginous question, which is "why am I me". I've wondered that before. However, it is nonsense. Naturally a person is that person, not some other person (and a tree is a tree, not some other tree). There's nothing strange about this. Why would it be otherwise? So the urge to ask the question really reveals some deep-seated misconception, or some other question that actually makes sense, and I wonder what that is.
I wonder if the origin of the question is the religious idea of a separate immortal soul which popped into this body and not into some other body - but in some way could have. This concept is in popular discourse like "what if I had been born in Italy in 1420?!", as if that were plausible - an "I" separate from this body/place/time/life experiences/memories/language/family/etc, but somehow still 'me'.
The boring materialist view is that a brain with genetics mixed from my parents, raised the way I was raised, with the experiences I had here and in this time, is what makes "me", and I couldn't be anywhere or anyone else.
Or, put another way, we are all everyone else - what it would be like if I were born to your parents and raised like you is… you. What you would be like here is… me.
Well, if I were you, I wouldn't worry about it.
> What would make you (the original) different from your replica?
You’d be in two different locations, have independent experiences, and your world lines would quickly diverge. Both of you would remember a common past.
How do you know, when you wake up in the morning, that you are the same "I" as you remember from the previous day? Who's to say the universe didn't multiply while you were asleep, and that there are now two or more of you waking up?
(You don’t actually need to go to sleep to do this: https://cheapuniverses.com/)
I think this is what Severance is about.
It would be a fork: identical experience until that point, but bifurcated from the point of the fork, since the copy no longer occupies the same physical space.
New commits.
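To stretch the version-control metaphor a bit: below is a minimal sketch, assuming Python's os.fork() (POSIX only), of how a fork gives two processes an identical history up to the fork point and divergent "commits" afterwards. The memories list is purely illustrative, not anything from the thread above.

  import os

  # Everything before the fork is shared history, identical in both copies.
  memories = ["stepped into the machine"]

  pid = os.fork()  # duplicate the entire process state at this instant

  if pid == 0:
      # Child process: same past, its own future "commits" from here on.
      memories.append("walked out of the far chamber")
  else:
      # Parent process: also the same past, but a different future.
      memories.append("walked out of the original chamber")

  # Each process prints its own, now-diverged history.
  print(os.getpid(), memories)

Both processes "remember" stepping into the machine, and neither printout is more authoritative than the other - which is roughly the duplication thought experiment above.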
It's quite obvious, given all available information, that consciousness arises from the brain. When someone talks as if it doesn't arise from the brain, they are not choosing the most rational and obvious hypothesis. They are most likely trying to scaffold an explanation to fit a biased spiritual world view in which consciousness comes from a made-up thing called spirit. Usually these people believe in a religion: an old world view of made-up stories created in a time when humanity understood far less.
Don't push the argument. It's not coming from a place of rationality even though he's deliberately not using the word "spirit".
It's not Steven Pinker's claim alone. Gazzaniga agrees, I think, and I know of one other prominent neuroscientist but don't remember his name. Pinker is "just" a psychologist.
(Edit: Michael Graziano is who I was trying to remember - he uses the words "schematic" and "model")
Your view is called "pan-psychism". It's interesting, but there isn't anything that makes it necessary. Everything we're finding out is that most or all thinking happens outside of consciousness, and the results bubble up into it as perception. Consciousness does seem to be universal within the brain, though.
I find pan-psychism interesting just because of its popularity - people want something spiritual, knowingly or not. I would advise not to insist that consciousness==soul, however, as neuroscience seems to be rapidly converging on a more mundane view of consciousness. It's best to think of one's "true" self according to the maxim that there is much more to you than meets the mind's eye.
Or, people are spiritual, and realize it to different degrees. It's very easy to get confused about what we know and don't know on these subjects.
This would imply that the behavior of elementary particles in the brain (which ultimately cause our observable behavior via nerve signals and muscle movements, including the texts we are typing or dictating here) differs from the one predicted by the known physical laws. That’s difficult to reconcile with the well-confirmed fundamental physical theories, and one has to wonder why nobody tries to experimentally demonstrate such known-physical-laws-contradicting behavior. It would be worth at least one Nobel Prize.
Secondly, it wouldn’t really explain anything. The “consciousness field” would presumably obey some kind of natural laws like the known fields do, but the subjective experience of consciousness would remain as mysterious as before (for those who do find it mysterious).
I cannot see how one might perform an experiment to determine which concept is correct. As with most things which are unfalsifiable, the idea can be amusing for a bit but is ultimately not useful to the extent that you can do anything about it. You cannot serve tea from Russell's Teapot.
If the brain is a receiver, information transfer could happen non-locally and the tea might be telepathy, precognition, or remote viewing. In the split brain example, demonstrating an ability to coordinate between hemispheres in ways not predicted by neural separation might challenge the physical origin of consciousness as with the chicken and shovel anecdote.
Experiments demonstrating an external source of consciousness would be very interesting.
Not a teapot in this case!
Ah, no.
Suppose you do all kinds of studies and none of them show any telepathy, precog, or remote viewing. You could still say that the brain is only a receiver; none of that would disprove the "brain-as-consciousness-receiver" concept. You would just say, I guess it's one-way, no telepathy.
It's not disprovable. And so, kind of boring.
Or communicate telepathically with dogs.
Yep, some unfinished philosophy if you're into it: you can imagine that our universe at a given moment is just a giant geometric shape, and that at the next moment it somehow changes into a new shape. How does this change happen? Some believe it's a computation according to a rule or rules; some that it's not a discrete change but a continuous equation that evolves the shape of the universe from one into another. Basically, you can imagine the whole universe as a long-exposure photograph in 3D, with some process that "forgets" almost all of it, leaving only slim slices of geometry and changing from one slice into the next. This forgetting of the current slice and "recalling" of the next is consciousness, the time-like process. It looks like the Big Bang was a matter-converted-to-energy (or "space converted to time") process, and the final fall into a giant black hole would be the reverse: energy converted to matter (or "time converted to space"). Some say electrons are like small black holes, so we potentially experience the infinitesimal qualia of coming into and out of existence, because we are sufficiently "time-like" and not too "space-like". I'll soon write a blog post ;)
Descartes was pretty much on the same page.
This is dualism, no?
It's not dualism at all. What the OP is proposing is similar to Spinoza (probably the most hardcore monist to ever exist), where mind is a fundamental property of the universe (in fact, there's only one mind) and each individual person is a 'mode' of it.
It's effectively akin to talking about mass. Even though mass is observable as a distinct phenomenon in any object, it's obviously not accurate to say that you "produce mass" or that it's "your mass" in some private, ontologically separated way; it just appears that way, by definition, if we look at particular manifestations of it.
The idea that the brain functions as a sort of radio capturing a consciousness field makes the most sense to me and also feels comforting in some way.
However, "makes sense to me" and "feels comforting" have no bearing on whether it's true.
I've had numerous LLMs tell me that humans are conscious because we are like radio receivers, picking up a single consciousness field of the universe itself.
So it's very interesting that you mention that.
Looks like this was one of the inspirations behind Severance.
It's certainly an inspiration for the Zizian cult, a group of vegan computer programmers currently being investigated by US authorities for a string of murders across the US. [0, 1]
> LaSota believed that humans have two minds, a left and right hemisphere, and each hemisphere can be good or evil, according to posts on her blog. Eventually, LaSota came to believe that only very few people — she among them — are double good. [1]
[0] https://www.usatoday.com/story/news/nation/2025/02/19/zizian...
[1] https://www.nbcnews.com/news/us-news/german-math-genius-get-...