Talk:Synthetic consciousness


Proposed TOC

Background and History

  • Making the distinction between synthetic consciousness and artificial intelligence
  • Identifying why research into synthetic consciousness is interesting/important/compelling, including:
    • one of the main raisons d'être of academic research is to use models of synthetic consciousness to gain insights into the nature of human consciousness (and not vice versa, as I think many people assume)

Early implementations

Successes

Failures

Current projects

Applications

The future

The future - in science

The future - in science fiction

Discussion

Can we start by using citations in every instance, in order to avoid the sort of problems that beset the other attempt? Matt Stan 15:34, 4 May 2004 (UTC)

So...

The term synthetic consciousness is preferred here to simulated consciousness because anything simulated is by definition not real. Some consciousness researchers (e.g. ...) believe that a synthetic consciousness will never be really conscious. Others (e.g. ...) believe that possibly some synthetic consciousness may one day be really conscious. A few (e.g. ...) believe that some synthetic consciousness is already really conscious. We may, some day, not be able to distinguish between a sufficiently sophisticated simulated consciousness and a natural real consciousness (says who?). By Leibniz's law the simulated duck, quacking like a real duck, might be considered as if it were real. But one might speculate that Leibniz's law could not be applied to a synthetic consciousness which was convincingly conscious but was of a type of consciousness unknown until that point. Whatever! The term synthetic consciousness is an inclusive term for the purposes of this Wikipedia article which allows the discussion to proceed constructively. Matt Stan 15:42, 4 May 2004 (UTC)
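(A note for readers following the Leibniz references in this thread: the principle being invoked is the identity of indiscernibles, usually written

    \forall F\,\bigl(F(x) \leftrightarrow F(y)\bigr) \rightarrow x = y

i.e. if x and y agree on every property F, they are one and the same thing. The duck test works through the contrapositive: exhibit one property on which the candidate and a known conscious being differ, and the identification fails.)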

The sentence starting "We may" is incontrovertible. You could change it to "we may or may not" if you like. Paul Beardsell 15:49, 4 May 2004 (UTC)
I think it is an inclusive term absolutely: Synthetic includes simulated. Paul Beardsell 15:51, 4 May 2004 (UTC)

On the point "...which was convincingly conscious but was of a type of consciousness unknown until that point". By what yardstick would one deem it to be convincingly conscious other than by comparison with known instances of consciousness, in which case surely it's back to Leibniz? Matt Stan 15:42, 4 May 2004 (UTC)

Leibniz only applies in identical situations. Leibniz is a yardstick which would only recognize a human-like consciousness. This might answer the question wrongly as to whether the aliens pouring out of the spaceship to envelop you with love^H^H^H^Hgoo would be conscious. Paul Beardsell 15:49, 4 May 2004 (UTC)
Yes, Leibniz acts as a constraint. If I see a creature that is unlike any other I cannot call it a duck (or, at least, I could, but it wouldn't be a duck). So I encounter a phenomenon that I am trying to assess to see whether I can call it consciousness. If I can apply Leibniz's law appropriately then there's no problem. I meet someone called Paul: he walks; he talks; he uses the first-person pronoun; he actually looks a bit like me and shares much of my morphology - if I hooked him up to a brain scanner I'd no doubt see patterns similar to the ones I'd see if I hooked myself up in the same way. Ergo I conclude that he's conscious (assuming I believe that I am). Leibniz doesn't, incidentally, stipulate to what lengths I should go in order to falsify any hypothesis, but provided I apply all the criteria I know, I reasonably conclude at some stage that Paul meets my paradigm of consciousness. Someone else could come along later and prove, again by Leibniz's law, that Paul had some deficiency or other difference that somehow made me change my mind and conclude that he was not conscious after all - just an ordinary application of scientific method. Now Leibniz's law obviously doesn't apply if I can't make suitable comparisons. I'm unclear how one would assess consciousness in that circumstance. Matt Stan 18:58, 4 May 2004 (UTC)
Should the green, six-legged alien converse with you directly by telepathy whenever you gazed at its luminous eye, then you might consider this a persuasive demonstration of consciousness which would fail the Leibniz test. As ever we have the necessary vs sufficient contrast involved in any test. Passing according to Leibniz in comparison with human consciousness is sufficient to persuade us of consciousness, but it is not necessary: We may apply a less demanding but also sufficient test. When we find one! Paul Beardsell 16:06, 4 May 2004 (UTC)
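(Put formally, the necessary/sufficient contrast being made here is: \text{PassesLeibnizTest}(x) \rightarrow \text{Conscious}(x) holds as a sufficient condition, while the converse \text{Conscious}(x) \rightarrow \text{PassesLeibnizTest}(x) is exactly what the telepathic alien would falsify. The predicate names are invented purely for illustration.)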
I don't think your example would necessarily fail the Leibniz test. I would only conclude that the alien was conscious if I had been able to compare his manifestation with my own understanding of what constituted consciousness. I can't easily say what attributes I would use to make that comparison. If I hadn't seen a bird and didn't know whether it quacked (were I to see it) then I couldn't conclude that it wasn't a duck. I have to see it first. Same with the alien. There are obviously grey areas, and the pony example is one. You say you think a pony is conscious; others would disagree. Each is applying the Leibniz test to a different degree and people will have to agree to differ. I am just saying that if one found a manifestation to which one could apply Leibniz's law, e.g. a machine that behaved exactly like a human, then there would be no reason to deny its consciousness, so Leibniz's law is useful in a limited way. One would have to find other criteria to demonstrate consciousness for manifestations that failed the Leibniz test. OK? Matt Stan 18:58, 4 May 2004 (UTC)
Some think Leibniz's law is only applicable in identical situations. See second para under Leibniz law. How that would cope with a six-legged, one-eyed, telepathic green alien I am unsure. Paul Beardsell 20:22, 4 May 2004 (UTC)
I think you are right, unfortunately! Leibniz's law would have been a useful catch-all, but, as the learned contributors to conciousentities point out, it can only effectively be used in a symbolic context. Matt Stan 11:54, 5 May 2004 (UTC)

Matt Stan's "no content" preference

Matthew, if you do not want any content here then say so. But the article at the point that I first saw it was not without POV. If you don't want to explain why it's synthetic not simulated, don't do so. Why mention Leibniz at all? Paul Beardsell 16:28, 4 May 2004 (UTC)

Having decided that synthetic was better than simulated, it did occur to me that there was no need even to mention simulated and hence no need to explain the term, as it doesn't occur elsewhere. I could just as well have put "we're calling it synthetic consciousness rather than unicorn custard because it is uncertain whether unicorns like custard, or alternatively whether there is a recipe for making custard out of unicorn parts". Matt Stan
OK, so that is just another way of saying you agree. The point is that the explanation you used re synthetic vs simulated was dripping with the weak-AC POV. Paul Beardsell 18:03, 4 May 2004 (UTC)

I thought that Leibniz admirably summed up the point I was trying to make under test acceptance criteria on the other page and is, I think, vital for a true assessment of any implementation. It's the meta-Turing test, if you like. If I (or you, or anyone else) can't tell the difference between a machine-implemented consciousness and a naturally occurring consciousness then one must conclude that the former is real (assuming the latter is real).

I fully understood that... Paul Beardsell 18:03, 4 May 2004 (UTC) ...the first time but I disagree with it. You have simply found a test DESIGNED to fail non-human consciousness. Paul Beardsell 19:32, 4 May 2004 (UTC)

However, you raise another semantic point. If the little green men from outer space are deemed conscious (by whatever criteria) then that's no guarantee that that consciousness is synthetic. paragraph break inserted

But then you are in an even weaker position: Using Leibniz you deny the genuinely conscious aliens consciousness. Using Leibniz (as you insist) the aliens would have synthesized a consciousness like theirs to populate their spaceships, if they decide not to come in person, so to speak. You will recognise no consciousness other than one which looks like yours. We have been at this anthropomorphic stand-off before. And the aliens came to my rescue before. Either we are alone in the Universe; or we are incapable of recognising aliens as conscious; or their consciousness is just like ours, Leibniz-like. Or non-human-like consciousness is possible and at least some examples of that will be recognisable as such by us. Leibniz is sufficient but not necessary. Paul Beardsell 18:03, 4 May 2004 (UTC)
No, I'm not denying any genuineness here. I don't deny the genuinely conscious aliens their consciousness - I only question how one would determine whether they were conscious at all. Both naturally-occurring and synthetic consciousnesses are genuine; I was questioning how to determine whether the consciousness arose naturally or whether it was synthesised. If we are just talking about validating consciousness then the consciousness page is the place for that. My intention is that here we are talking about how to synthesise consciousness, which includes how to assess whether it has been actualised, but not how to tell whether it was synthesised (i.e. by intent) or whether it arose naturally (i.e. without intent). Matt Stan 18:58, 4 May 2004 (UTC)
In an SF story I once read, friendly aliens come to Earth and ask to be shown the richness of Earth culture. The lucky human with whom they had made contact was a talented pianist and played his heart out. Bach and Beethoven did not impress the aliens at all and they claimed to have a much richer music-like experience in their culture. The human begged to be shown this but they initially refused, saying that after 5 mins his receptor circuits would be blown and he would always miss the experience he had had. He persuaded them to wire him up. After the 5 mins were up he begged the aliens to play him some more and they did, but he could "hear" nothing. He was disconsolate. He played the piano to cheer himself up but he was now as unimpressed with Bach and Beethoven as the aliens had been. Paul Beardsell 19:29, 4 May 2004 (UTC)
Human consciousness might be limited but a higher non-human consciousness might find a way of communicating with us. You say you deny nothing but I disagree. And what a poor view you must have of dogs and cats. Paul Beardsell 19:29, 4 May 2004 (UTC)

Synthetic consciousness, by definition, must surely be limited to the results of some synthesis process which, I am assuming, is distinct from a naturally occurring biological or quasi-biological process (quasi-biological being my term for naturally occurring lifeforms not of this planet). Now without knowledge of the processes involved in the creation of the little green men it would not be possible to know whether they were examples of synthetic or natural consciousness, in which case surely all one could do is perhaps to deem them conscious, without regard to how that consciousness arose. So, the topic of synthetic consciousness must be limited to attempts to implement consciousness that are engineered as a result of some intent rather than to instances of consciousness that arise by other means. (I am discounting Richard Dawkins' idea here that humans are the means by which genes replicate, with the underlying assumption that genes intend to do this. I am assuming that genes are unconscious, even though they give rise to consciousness.) That's not to say that one couldn't synthesise types of consciousness which are hitherto unfamiliar, but to do so would perhaps require a broader definition of consciousness itself.

As I have shown: Irrelevant to the strange-consciousness issue. However interesting it is otherwise. Paul Beardsell 18:03, 4 May 2004 (UTC)

Regarding content, I can't restrict that. I thought it unnecessary to duplicate the material on the other page without perhaps deleting it from there. I think there is a good case for a rewrite from scratch, with an article that is planned/designed on the talk page here and written in bullet-proof manner. How about starting with a table of contents? (G.T.K.T.E.H) Matt Stan 17:31, 4 May 2004 (UTC)

No you cannot restrict content. But that I am "wrecking" it does not follow. I have also copied no material from any other page. Paul Beardsell 18:03, 4 May 2004 (UTC)
Jolly good! I'm glad to see you're K.T.E.H. Matt Stan 18:58, 4 May 2004 (UTC)
Explain. Paul Beardsell 19:29, 4 May 2004 (UTC)
[1] Matt Stan 11:15, 5 May 2004 (UTC)

What to avoid

I'd like to avoid interminable discussions about weak vs strong implementations, with an emphasis on the transitive sense of consciousness, i.e. consciousness of various phenomena, rather than the philosophical debate about whether machine consciousness is possible per se, as this is well-covered elsewhere in wikipedia, e.g. at artificial intelligence. I realise this limits the scope of the article, but this is intentional. I think the springboard for this notion is that an implementation needs to be pro-active in some respect. Perhaps a good example is an alarm clock. A clock is aware of the time - its consciousness, if that is the correct term, springs into life when the alarm goes off for the purpose of waking me up on time. Now, most would argue that conscious is not the correct term here. But what about an alarm clock that decided for itself what time to wake me up, based on inputs that it possessed, say, by noticing whether I was in my bed, by knowing what day of the week it was, by measuring how much sleep I had had recently, by weighing my unopened mail, by listening to whether my cats were hungry, and so on? Such an alarm clock might qualify as being synthetically conscious of my sleep needs, and my relationship with it would lead me to consider it as conscious as if I had a man-servant to perform the same function. Although I fabricated the above example, there are robotic machines that perform useful functions based on awareness of their environment and on drives that are in-built. These are the models for synthetic consciousness that I have in mind here. Note also the constraint imposed by the use of synthetic in the title. Here I intend synthetic consciousness to mean consciousness built with an intent, rather than naturally occurring consciousness, which arises spontaneously as a result of, say, a biological process. This disqualifies any notions of super-human consciousness, perhaps such as the all-pervading consciousness portrayed in the film The Matrix. Matt Stan 09:14, 6 May 2004 (UTC)
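(To make the alarm clock example concrete, here is a minimal sketch of the decision logic described above. Everything in it - the sensor names, the thresholds, the adjustments - is invented purely for illustration; it is not a real device API, just the shape of a machine that maps awareness of its environment onto a pro-active decision.)

from dataclasses import dataclass
from typing import Optional

@dataclass
class Observations:
    user_in_bed: bool
    day_of_week: int           # 0 = Monday ... 6 = Sunday
    recent_sleep_hours: float  # rolling average over recent nights
    unopened_mail_grams: int   # weight of the unopened mail pile
    cats_meowing: bool         # crude proxy for hungry cats

def choose_wake_time(obs: Observations) -> Optional[float]:
    """Pick a wake-up hour (24h clock) from the clock's own inputs."""
    if not obs.user_in_bed:
        return None  # nobody in bed, nothing to decide
    wake = 7.0 if obs.day_of_week < 5 else 9.0  # weekday vs weekend baseline
    if obs.recent_sleep_hours < 6.0:
        wake += 0.5    # sleep debt: allow a little extra lie-in
    if obs.unopened_mail_grams > 500:
        wake -= 0.5    # a backlog of mail suggests an earlier start
    if obs.cats_meowing:
        wake -= 0.25   # hungry cats will not wait
    return wake

# Example: a weekday, short on sleep, with hungry cats -> wakes at 7.25
print(choose_wake_time(Observations(True, 2, 5.5, 200, True)))

(The point of the sketch is only that the clock's decision is a function of things it is "aware" of; whether mapping inputs to a decision like this deserves the word conscious is exactly the question under discussion.)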

I very much support this approach. It lets the poor little alarm clock be built (isn't it cute?). I will know that the conscious alarm clock is really conscious and other (perhaps more rational) people will know it merely simulates a conscious alarm clock. Where is Leibniz when you need him? Essentially, what you are saying is, discussion of the reality or otherwise of the apparent consciousness is BANNED. Here. Paul Beardsell 16:10, 7 May 2004 (UTC)

You raise the interesting point that one's perception of an entity's consciousness might be a function of one's relationship with that entity. Children imbue their dolls with consciousness and talk to them as though they were real people. Religious people do likewise with their gods, and animistic religions with various artifacts and natural objects. This means that an assessment of any implementation is not going to be child's play, as Paul illustrates with the conscious alarm clock above. The implication is that a conscious machine might be genuinely conscious as far as some people were concerned, but not as far as others are concerned. I'm not sure how one would reach consensus on this issue without getting into an 'Oh no, it's not' / 'Oh yes, it is' - type argument. It may be that rationality has nothing to do with it but the marketing law would apply. The marketing law is "The truth is what the people believe the truth to be." In fact, I'm surprised people aren't already using synthetic consciousness as a marketing ploy for all sorts of gadgets. Perhaps it's because the common perception of machines that seem too smart is that they're a bit scary, and therefore it's better for the marketeers to play down any notion that the thing might actually be conscious. I remember being surprised, and delighted, when my new car some while ago complained (by beeping) that I'd left the lights on when I got out. There was nothing in the manual about this feature, and it did seem that the car really cared about its battery not going flat. Now I know that the car didn't really care, but it seemed so, and I was happy to go along with the idea of a car that had consciousness of things that I was not even aware it was conscious of. The BMW that stays in lane is a better example. Here is a car with a will to avoid crashing even when the driver goes to sleep (i.e. loses consciousness). In that event the car itself can take over an aspect of the driving. What's that, if not an implementation of synthetic consciousness? Matt Stan 08:12, 11 May 2004 (UTC)


From WP:VFD

Wikipedia:Votes for deletion/synthetic consciousness

Redirect

Well, it's been over a year since the last discussion here and there's been nothing done with this article, whereas Artificial_consciousness has grown quite large. Does anyone still object to this page redirecting there? — Schaefer 05:05, 14 August 2005 (UTC)