
Comment author: Tenoke 30 May 2013 12:30:44AM *  0 points [-]

FWIW I already summarized this study in a LessWrong comment 3 weeks ago.

Comment author: dvasya 30 May 2013 12:07:36AM 0 points [-]

Indeed anosognosia is mentioned multiple times in the paper, perhaps serving as the motivation.

Comment author: Zaine 29 May 2013 11:59:50PM 0 points [-]

Even with the risk assessment metric based upon financial concerns, water to the ears may still trigger feelings of vulnerability - can anyone think of a way to mitigate this confound? Though I am curious why this effect would be more pronounced in the right hemisphere.

Comment author: gothgirl420666 29 May 2013 11:56:57PM 0 points [-]

I read somewhere on here that it also makes you vomit.

Comment author: Eliezer_Yudkowsky 29 May 2013 11:53:31PM 0 points [-]

My understanding is that any speedup would be fairly implausible. I mean, isn't the whole lesson of l'affaire D-Wave that you need maintained quantum coherence, and that requires quantum error-correction, which is why Scott Aaronson didn't believe the D-Wave claims? Or is that just an unusually crisp human-programming way of doing things?

Comment author: Qiaochu_Yuan 29 May 2013 11:51:30PM 1 point [-]

Squirting water into ears has come up on LW before in connection with anosognosia, e.g. The Apologist and the Revolutionary. So this is at least somewhat consistent with my model of reality.

Comment author: jsteinhardt 29 May 2013 11:47:59PM 0 points [-]

I'm not sure where the line would be drawn; I think it's possible that neurons are getting speedups by exploiting quantum effects. I don't think it's using them to solve problems that aren't in P.

Comment author: dvasya 29 May 2013 11:44:51PM 0 points [-]

Maybe if somebody came up with a nice self-experiment protocol...

Comment author: lukstafi 29 May 2013 11:44:27PM *  0 points [-]

Another point is that value (actually, a structure of values) shouldn't be confused with a way of life. Values are abstractions: various notions of beauty, curiosity, elegance, so-called warmheartedness... The exact meaning of any particular such term is not a metaphysical entity, so it is difficult to claim that an identical term is instantiated across different cultures / ways of life. But there can be very good translations that map such terms onto a different way of life (and back).

Comment author: jsteinhardt 29 May 2013 11:43:31PM 0 points [-]

You missed the key word "computationally". Of course a pseudorandom generator is a mathematically distinct object, but not in a way that the universe is capable of knowing about (at least assuming that there are cryptographic pseudorandom generators that are secure against quantum adversaries, which I think most people believe).

Comment author: jsteinhardt 29 May 2013 11:38:57PM 1 point [-]

Yes, and a pretty weird feature at that (being in BQP but not P is pretty odd unless BQP was designed to contain the problem in the first place).

Comment author: jsteinhardt 29 May 2013 11:37:18PM 1 point [-]

Low sample size, not reproduced (unless I'm wrong?), unclear that results would generalize even if true. I'm not sure it's fruitful to pay attention to such studies.

Comment author: Larks 29 May 2013 11:02:20PM 0 points [-]

Nor was I in fact making any arguments - I was simply stating the position. It's been a few years since I've studied epistemology, so I wouldn't trust myself to do so. SEP is normally a good bet, and I seem to recall enjoying Nozick (Philosophical Explanations) and the Thermometer Model of Knowledge.

I don't recall being convinced by any of the Externalist models I studied (Relevant Possible Alternatives, Tracking, Reliabilism, Causal and Defeasibility accounts), but I think something in that ballpark is a good idea. Externalism has been, in general, a very successful philosophical project, in a variety of areas (e.g. content externalism).

Also, I hate to say it, but I think you would be better off ignoring everything that has been said on this thread. LW is good for many things, but its appreciation of academic philosophy is frankly infantile.

Comment author: itaibn0 29 May 2013 10:48:14PM 0 points [-]

I prefer A. The paperclipping AI will need to contemplate many interesting and difficult problems in physics, logistics, etc. to maximize paperclips. In doing so it will achieve many triumphs I would like a descendant of humanity to achieve. One potential problem I see is that the paperclipper will be crueler to intelligent life on other planets that isn't powerful enough to have leverage over it.

Comment author: Juno_Watt 29 May 2013 10:43:53PM 0 points [-]

You mean there are ideas no philosopher has contemplated?

Comment author: paulfchristiano 29 May 2013 10:29:39PM *  0 points [-]

As a pedantic note, if you want to derandomize algorithms it is necessary (and sufficient) to assume P/poly != E, i.e. polynomial-size circuits cannot compute all functions computed by exponential-time computations. This is much weaker than P != NP, and is consistent with e.g. P = PSPACE. You don't have to be able to fool an adversary to fool yourself.

This is sometimes sloganized as "randomness never helps unless non-uniformity always helps," since it is obvious that P << E and generally believed that P/poly is about as strong as P for "uniform" problems. It would be a big shock if P/poly were so much bigger than P.

But of course, in the worlds where you can't derandomize algorithms in the complexity-theoretic sense, you can still look up at the sky and use the whole universe to get your randomness. What this means is that you can exploit much of the stuff going on in the universe to do useful computation without lifting a finger, and since the universe is astronomically larger than the problems we care about, this is normally good enough. General derandomization is extremely interesting and important as a conceptual framework in complexity theory, but useless for actually computing things.

Comment author: nyan_sandwich 29 May 2013 10:22:47PM 0 points [-]

post narcissism

I knew exactly what you were referring to just from the title.

ie. I get that too, and quite badly.

It is most acute immediately after posting, with time length proportional to the emotional mass of the post. Therefore, I should strategically post big things so that I'm not working on something else important right after.

For example, this sunk me for the second half of Sunday, and probably Monday as well.

Comment author: Juno_Watt 29 May 2013 10:19:08PM 0 points [-]

The apex of this is represented by Dr. Sheldon Cooper, who is, essentially, a complete fundamentalist over every single thing in his life; he applies this attitude to everything, right down to people's favorite flavor of pudding: Raj is "axiomatically wrong" to prefer tapioca, because the best pudding is chocolate.

Behaviour not confined to fiction...

Comment author: KnaveOfAllTrades 29 May 2013 10:02:38PM *  0 points [-]

You need to clarify your intentions/success criteria. :) Here's my What Actually Happened technique to the rescue:

(a) You argued with some (they seem) conventional philosophers on various matters of epistemology.
(b) You asked LessWrong-type philosophers (presumably having little overlap with the aforementioned conventional philosophers) how to do epistemology.
(c) You outlined some of the conventional philosophy arguments on the aforementioned epistemological matters.
(d) You asked for neuroscience pointers to be able to contribute intelligently.
(e) Most of the responses here used LessWrong philosophy counterarguments against arguments you outlined.
(f) You gave possible conventional philosophy countercounterarguments.

This is largely a failure of communication because the counterarguers here are playing the game of LessWrong philosophy, while you've played, in response, the game of conventional philosophy, and the games have very different win conditions that lead you to play past each other. From skimming over the thread, I am as usual most inclined to agree with Eliezer: Epistemology is a domain of philosophy, but conventional philosophers are mostly not the best at—or necessarily the people to go to in order to apprehend—epistemology. However, I realise this is partly a cached response in myself: Wanting to befriend your coursemates and curry favour with teachers isn't an invalid goal, and I'd suspect that in that case you wouldn't best be served by ditching them. Not entirely, anyway...

Based on your post and its language, I identify at least the three following subqueries that inform your query:

(i) How can I win at conventional philosophy?
(ii) How can I win by my own argumentative criteria?
(iii) How can I convince the conventional philosophers?

Varying the balance of these subqueries greatly affects the best course of action.

If (i) dominates, you need to get good at playing the (language?) game of the other conventional philosophers. If their rules are anything like in my past fights with conventional philosophers, this largely means becoming a beast of the 'relevant literature' so that you can straightblast your opponents with rhetoric, jargon, namedropping, and citations until they're unable to fight back (if you get good enough, you will be able to consistently score first-round knockouts), or so that your depth in the chain of counter^n-arguments bottoms them out and you win by sheer attrition in argumentdropping, even if you take a lot of hits.

If (ii) dominates, you need to identify what will make you feel like you've won. If this is anything like me in my past fights with conventional philosophers, this largely means convincing yourself that while what they say is correct, their skepticism is overwrought and serves little purpose, and that you are superior for being 'useful'.

If (iii) dominates, the approach depends on what you're trying to convince them of. For example, whether the position of which you're trying to convince them is mainstream or contrarian completely changes your argumentative approach.

In the case of (d), the nature of the requested information is actually relatively clear, but the question arises of what you intend to do with it. Is it to guide your own thinking, or mostly to score points from the other philosophers for your knowledge, or...? If it's for anything other than improving your own arguments by your own standards, I would suggest (though of course you have more information about the philosophers in question) that you reconsider how much of a difference it will make; a lot of philosophers at best ignore and at worst disdain relevant information when it is raised against their positions, so the intuition that relevant information is useful for scoring points might be misguided.

Where you speak of seeing yourself shifting/having shifted and moving away from an old position (foundationalism) or towards a new one (coherentism) and describing your preference for foundationalism as irrational, it seems like you probably should just go ahead and disavow foundationalism. Or at least, it would if I were confident such affiliations were useful; I'm not. See conservation of expected evidence.

Comment author: Eliezer_Yudkowsky 29 May 2013 09:44:46PM 1 point [-]

Okay, makes sense if you define "distinguishable from random" as "decodable with an amount of computation polynomial in the randseed size".

Comment author: DSherron 29 May 2013 09:39:02PM 2 points [-]

Um, sorry, but seriously?! Arguing about definitions of words? This is entirely ridiculous and way below the minimum rationality that should be expected from posts on Less Wrong. Downvoted for proposing serious discussion of a topic that deserves no such thing. Since you seem sincere I'll try and give you a quick overview of the problems here, but you really need to reread the sequence "A Human's Guide to Words" to get a full picture.

First, while I have an answer to what the useful definition of evidence is (in the sense that it describes a useful feature of reality), I will refrain from pointing it out here because it is irrelevant to the topic at hand. If someone really needed the word "evidence" for some reason, including potential hypothesis-favoring-data sufficient to convince me that most people mean something very different from me by the word "evidence", I'd be willing to give up the word. After all, words don't have meanings, they're just mental paintbrush handles for someone else's brain, and if that handle paints the wrong picture then I'll use a different one.

That said, the thrust of the problem with your post is exactly the same as the definitional dispute over a tree in a forest. There is no "true" meaning of evidence, and anyone arguing about one is doing so with an intent to sneak in connotations to win an argument by appealing to basic human fallacies. Definitional Disputes are an indisputably Dark Side tactic; the person doing so might be honest, but if so then they are severely confused. Most people couldn't identify the difference between good and bad epistemology if it hit them in the face, and this does not make them evil, but it does make them wrong. Why would anyone care what the "true meaning" of evidence is, when they could just break down the various definitions and use each consistently and clearly? The only reason to care runs along the lines of "evidence is an important concept [hidden inference], this is the true definition, therefore this definition is important", replacing "important" with something specific to some discussion.

Only think about words as paintbrush handles, and the problem goes away. You can then start focusing on the concept behind your handle, and trying to communicate instead of win. Once you and your audience can all understand what is being said - that is, when the pictures you draw in their brain match the pictures in your head - then you're done. If you dispute anything at that point, it will be your true dispute, and it will have a resolution - either you have different priors, one or more of you is irrational, or you will walk away in agreement (or you'll run out of time - humans aren't ideal reasoners after all). Play Rationalist Taboo - what question is this post even asking, when you remove explicit reference to the word "evidence"? You can't ask questions about a concept which you can't even identify.

I feel like I've seen an increasing amount of classical Traditional Rationality bullshit on this site lately, mostly in Discussion. That could just be me starting to notice it more, but I feel like I need to make a full post about it that I can link to whenever this stuff comes up. These are basic errors, explicitly warned against in the Sequences, and the whole point of Less Wrong is supposed to be somewhere where this sort of crap is avoided. Apologies for language.

Comment author: niceguyanon 29 May 2013 09:21:34PM 0 points [-]

Post Narcissism: An absolutely intense eagerness to read your own posts and comments after you wrote them, accompanied by a feeling of flow while doing so,

I always thought this was more common than not, a less severe version at least. However, I have the full-blown case.

Comment author: Wei_Dai 29 May 2013 09:00:55PM *  6 points [-]

If you're not familiar with PRGs, distinguishers, advantage, negligible functions etc I'd be happy to Skype you and give you a brief intro to these things.

There are also intros available for free on Oded Goldreich's FoC website.

Here's my simplified intuitive explanation for people not interested in learning about these technical concepts. (Although of course they should!) Suppose you're playing rock-paper-scissors with someone, you're using a pseudorandom number generator, and P=NP. Then your opponent could do the equivalent of trying all possible seeds to see which one would reproduce your pattern of play, and then use that to beat you every time.
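Here's a rough Python sketch of that seed-guessing attack, using Python's built-in Mersenne Twister as a stand-in generator and a deliberately tiny 16-bit seed space so the brute force finishes quickly; the seed size and number of observed rounds are made up purely for illustration:

    import random

    MOVES = ["rock", "paper", "scissors"]
    BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

    def pseudorandom_moves(seed, n):
        # Toy "PRG": Python's Mersenne Twister keyed by a small seed.
        rng = random.Random(seed)
        return [rng.choice(MOVES) for _ in range(n)]

    def counter_move(observed):
        # Brute-force every possible seed, find one consistent with the
        # opponent's play so far, and play whatever beats its next move.
        n = len(observed)
        for seed in range(2**16):          # feasible only because the seed space is tiny
            if pseudorandom_moves(seed, n) == observed:
                predicted_next = pseudorandom_moves(seed, n + 1)[-1]
                return BEATS[predicted_next]
        return None                        # no consistent seed found

    opponent = pseudorandom_moves(12345, 20)   # opponent's hidden seed is 12345
    print(counter_move(opponent[:15]), "beats", opponent[15])

With a real cryptographic generator the seed space is astronomically larger, which is why this only becomes a worry in worlds where such generators can be broken efficiently (e.g. if P=NP).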

In non-adversarial situations (which may be what Eliezer had in mind) you'd have to be pretty unlucky if your cognitive algorithm or environment happens to serve as a distinguisher for your pseudorandom generator, even if it's technically distinguishable.

Comment author: TheOtherDave 29 May 2013 08:57:41PM 0 points [-]

If we're going to be picky, the idea that only neurons are relevant isn't right either; if you replaced each neuron with a neuron-analog (a chip or a neuron-emulation-in-software or something else) but didn't also replace the non-neuron parts of the cognitive system that mediate neuronal function, you wouldn't have a working cognitive system.
But this is a minor quibble; you could replace "neuron" with "cell" or some similar word to steelman your point.

Comment author: Creutzer 29 May 2013 08:25:51PM *  1 point [-]

What exactly do you mean by "empiricism does not hold"? Do you mean that there are no laws governing reality? Is that even a thinkable notion? I'm not sure. Or perhaps you mean that everything is probabilistically independent from everything else. Then no update would ever change the probability distribution of any variable except the one on whose value we update, but that is something we could notice. We just couldn't make any effective predictions on that basis - and we would know that.

Comment author: ciphergoth 29 May 2013 07:54:24PM *  7 points [-]

You don't sound like you're now much less confident you're right about this, and I'm a bit surprised by that!

I got the ladder down so I could get down my copy of Goldreich's "Foundations of Cryptography", but I don't quite feel like typing chunks out from it. Briefly, a pseudorandom generator is an algorithm that turns a small secret into a larger number of pseudorandom bits. It's secure if every distinguisher's advantage shrinks faster than the reciprocal of any polynomial function. Pseudorandom generators exist iff one-way functions exist, and if one-way functions exist then P != NP.
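Spelling out the definition being paraphrased (roughly the standard formulation, not a verbatim quote from Goldreich): a deterministic polynomial-time G with |G(s)| = ℓ(|s|) > |s| is a pseudorandom generator if for every probabilistic polynomial-time distinguisher D,

    $$\left|\;\Pr_{s \leftarrow \{0,1\}^n}\big[D(G(s)) = 1\big] \;-\; \Pr_{r \leftarrow \{0,1\}^{\ell(n)}}\big[D(r) = 1\big]\;\right| \;\le\; \varepsilon(n),$$

where ε is negligible, i.e. for every polynomial p, ε(n) < 1/p(n) for all sufficiently large n. The "shrinks faster than the reciprocal of any polynomial" clause is exactly that advantage condition.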

If you're not familiar with PRGs, distinguishers, advantage, negligible functions etc I'd be happy to Skype you and give you a brief intro to these things.

Comment author: ESRogs 29 May 2013 07:40:17PM 2 points [-]

Could you flesh that out a bit? Is the idea that it's just one more case where a feature of our universe turns out to be necessary for consciousness?

Comment author: Eliezer_Yudkowsky 29 May 2013 07:32:24PM 1 point [-]

Can you refer me to somewhere to read more about the "usual definitions" that would make this true? If I know the Turing machine, I can compare the output to that Turing machine and be pretty sure it's not random after running the generator for a while. Or if the definition is just lack of expected correlation with bits playing a functional role, then that's easy to get. What's intermediate such that 'indistinguishable' randomness means P!=NP?

Comment author: ciphergoth 29 May 2013 07:27:36PM 3 points [-]

A proof that any generator was indistinguishable from random, given the usual definitions, would basically be a proof that P != NP, so it is an open problem. However we're pretty confident in practice that we have strong generators.

Comment author: OrphanWilde 29 May 2013 07:22:13PM 0 points [-]

since they all disagree with each other and therefore can't all be wrong.

Doesn't follow.

Comment author: Juno_Watt 29 May 2013 07:04:42PM *  0 points [-]

OK, it has been established that you attach True to the sentence:

"Philosophers are not judged based on whether their claims accurately describe the world".

The question is what that means. We have established that philosophical claims can be about the world, and it seems uncontroversial that some of them make true claims some of the time, since they all disagree with each other and therefore can't all be wrong.

The problem is presumably the epistemology, the justification. Perhaps you mean that philosophy doesn't use enough empiricism. Although it does use empiricism sometimes, and it is not as though every scientific question can be settled empirically.

Comment author: DanielLC 29 May 2013 06:54:17PM 0 points [-]

It's a quantum effect, but it's one that's easily taken advantage of, as opposed to the crazy difficult stuff a quantum computer can do. As such, a computer that can do that can be considered classical.

For that matter, transistors work by exploiting quantum effects. We still don't call them quantum computers.

Comment author: DanielLC 29 May 2013 06:50:22PM 0 points [-]

You don't need a quantum computer to exploit quantum effects for random number generation. I've heard it's common to do that by sending electricity backwards through a diode and amplifying it.

Comment author: DSherron 29 May 2013 06:29:39PM 1 point [-]

It was almost certainly not supposed to argue against utilitarianism in general. It argues against the typical mind fallacy mostly; universal hedonic utilitarianism is just one particularly stupid incarnation of that, along with any other value system that arbitrarily values things valued by other minds without restraint.

Comment author: ozziegooen 29 May 2013 06:28:32PM 2 points [-]

This makes me angry on so many levels. (Spoilers Below)

  • "We believe it is the goal of every species to behave with the best ethics possible." -> Who would say that? If only intelligent beings, is the argument that every single being a utilitarian, or only a small important subset? Do we argue that apes should be utilitarians? What do they mean the "goal"; isn't the benefit of the ethics more the "goal" then the means are?

  • "The best ethics? It's like... trying to maximize how happy everyone is." Utilitarianism is here explained by someone who barely seems to understand utilitarianism. This is a straw man; that's not fair. Any decent enthusiast should at least provide a much more succinct response, and I imagine that in the future we should have a much better understanding of it as well.

  • The alien plays sarcastic in the end, saying its "highest behavior" is "smugly tolerating inferior species". I'm curious what Zach could have put there as a realistic possible example.

Short answer, it's possible to make anything look silly when you get idiots to represent it.

Also, note that if we actually were at a point as a species where utilitarianism were the de-facto philosophy, my guess is that the world would look quite different. And I agree with Tuxedage on hedonistic utilitarianism vs. utilitarianism here.

Comment author: Eliezer_Yudkowsky 29 May 2013 06:22:30PM 3 points [-]

What? No it's not. There are no pseudo-random generators truly ultimately indistinguishable in principle from the 'branch both ways' operation in quantum mechanics; the computations all have much lower Kolmogorov complexity after running for a while. There are plenty of cryptographically strong pseudo-random number generators which could serve any possible role a cognitive algorithm could possibly demand for a source of bits guaranteed not to be expectedly correlated with other bits playing some functional role, especially if we add entropy from a classical thermal noise source, the oracular knowledge of which would violate the second law of thermodynamics. This is not an open problem. There is nothing left to be confused about.

Comment author: Eliezer_Yudkowsky 29 May 2013 06:18:20PM 3 points [-]

Give up on justifying answers and just try to figure out what the answers really actually are, i.e., are you really actually inside an Evil Demon or not. Once you learn to quantify the reasoning involved using math, the justification thing will seem much more straightforward when you eventually return to it. Meanwhile you're asking the wrong question. Real epistemology is about finding correct answers, not justifying them to philosophers.

Comment author: Alsadius 29 May 2013 06:17:57PM 0 points [-]

Yeah, I was using the non-adaptive brain as a baseline reductio ad absurdum. Obviously, it's possible to do better - the computing power wasted in the above design would be monumental, and the human brain is not such a model of efficiency that you couldn't do better by throwing a few extra orders of magnitude at it. But it's something that even an AI skeptic should recognize as a possibility.

Comment author: Eliezer_Yudkowsky 29 May 2013 06:15:34PM 1 point [-]

Penrose would claim not to understand how 'collapse' occurs.

Comment author: Eliezer_Yudkowsky 29 May 2013 06:13:24PM 2 points [-]

Quantum tunneling != quantum computing.

Quantum 'randomness' != quantum computing. No one has ever introduced, even in principle, a cognitive algorithm that requires quantum 'randomness' as opposed to thermal noise.

Comment author: Eliezer_Yudkowsky 29 May 2013 06:12:26PM 3 points [-]

Quantum effects or quantum computation? Technically our whole universe is a quantum effect, but most of it can't be regarded as doing information processing, and of the parts that do information processing, we don't yet know of any that are faster on account of quantum superpositions maintained against decoherence.

Comment author: shinoteki 29 May 2013 06:11:51PM 1 point [-]

If there is a feasible pseudorandomness generator that is computationally indistinguishable from randomness, then randomness is indeed not necessary. However, the existence of such a pseudorandomness generator is still an open problem.

Comment author: jsteinhardt 29 May 2013 05:30:48PM 2 points [-]

So even if you think the human mind uses quantum computation, this doesn't mean that the same thing can't be done on a classical machine.

An exponential slowdown basically means that it can't be done. If you have an oracle for EXPTIME then you're basically already set for most problems you could want to solve.

Comment author: jsteinhardt 29 May 2013 05:27:07PM 4 points [-]

How could "true randomness" be required, given that it's computationally indistinguishable from pseudorandomness?

Comment author: Manfred 29 May 2013 05:27:05PM *  0 points [-]

My previous reply wasn't very helpful, sorry. Let me reiterate what I said above: making assumptions isn't so much rational as unavoidable. And so you ask "then, should we believe in the external world?"

Well, this question has two answers. The first is that there is no argument that will convince an agent who didn't make any assumptions that they should believe in an external world. In fact, there is no truth so self-evident it can convince any reasoner. For an illustration of this, see What the Tortoise Said to Achilles. Thus, from a perspective that makes no assumptions, no assumption is particularly better than another.

There is a problem with the first answer, though. This is that "the perspective that makes no assumptions" is the epistemological equivalent of someone with a rock in their head. It's even worse than the tortoise - it can't talk, it can't reason, because it doesn't assume even provisionally that the external world exists or that (A and A->B) -> B. You can't convince it of anything not because all positions are unworthy, but because there's no point trying to convince a rock.

The second answer is that of course you should believe in the external world, and common sense, and all that good stuff. Now, you may say "but you're using your admittedly biased brain to say that, so it's no good," but, I ask you, what else should I use? My kidneys?

If you prefer a slightly more sophisticated treatment, consider different agents interpreting "should we believe in the external world" with different meanings of the word "should". We can call ours human_should, and yes, you human_should believe in the external world. But the word no_assumptions_should does not, in fact, have a definition, because the agent with no assumptions, the guy with a rock in his head, does not assume up any standards to judge actions with. Lacking this alternative, the human_reasonable course of action is to interpret your question as "human_should we believe in the external world," to which the answer is yes.

Comment author: jshibby 29 May 2013 05:26:26PM 0 points [-]

Actually, early empiricists wanted to consider tautologies to be just those statements that are confirmed by any evidence whatsoever. (This enables an empiricist to have a pure evidential base of only empirical events.) It doesn't sound great, but some folks like the conclusion that anything (or anything possible) is evidence for a tautology.

Comment author: jsteinhardt 29 May 2013 05:21:27PM 1 point [-]

Really? I think it's plausible that quantum effects play an important role in the brain, but I'd be very surprised if that was actually an obstacle to AI.

Comment author: jsteinhardt 29 May 2013 05:13:25PM 1 point [-]

This post made me realize the following fun fact: if AI were in BQP but not in BPP, then that would provide non-negligible evidence for anthropics being valid.

Comment author: jsteinhardt 29 May 2013 05:11:15PM 1 point [-]

While I doubt AI needs QC, I don't think this argument works. Your same argument seems to rule out birds exploiting quantum phenomena to navigate, yet they are thought to do so.

Comment author: jsteinhardt 29 May 2013 05:03:57PM 0 points [-]

That seems a little extreme; presumably there's a difference between using statistical tests as a heuristic you don't understand, and using statistical tests in a well-understood way, even if you're not deriving finance from first principles.

Also, CAPM isn't actually true (i.e. assumptions never hold in the real world), whereas statistics is.

Comment author: falenas108 29 May 2013 05:02:45PM *  0 points [-]
Comment author: kenzi 29 May 2013 04:47:03PM 0 points [-]

Nope, you're not the only one. Yikes! Thanks for the heads up, we'll look into it.

Comment author: Tuxedage 29 May 2013 04:25:27PM 0 points [-]

This seems to be an argument against hedonistic utilitarianism, but not utilitarianism in general.

Comment author: shminux 29 May 2013 04:24:20PM 0 points [-]

And that's basically a classical device, certainly doesn't need any coherence.

I suppose we ought to define what "classical" and "quantum" mean.

Comment author: DSherron 29 May 2013 04:18:09PM 1 point [-]

"More susceptible" is not the same as "susceptible". If it's bigger than an atom, we don't need to take quantum effects into account to get a good approximation, and moreover any effects that do happen are going to be very small and won't affect consciousness in a relevant way (since we don't experience random changes to consciousness from small effects). There's no need to accurately model the brain to perfect detail, just to roughly model it, which almost certainly does not involve quantum effects at all.

Incidentally, there's nothing special about quantum randomness. Why should consciousness be related to splitting worlds in a special way? Once you drop the observer-focused interpretations, there's nothing related between them. If the brain needs randomness there are easier sources.

Comment author: JoshuaZ 29 May 2013 04:02:10PM 0 points [-]

Quantum computers can be simulated on classical computers with exponential slowdown. So even if you think the human mind uses quantum computation, this doesn't mean that the same thing can't be done on a classical machine. Note also that BQP (the set of problems efficiently computable by a quantum computer) is believed (although not proven) to not contain any NP-complete problems.
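To see where the exponential slowdown comes from, here is a minimal state-vector simulation sketch in Python (assuming numpy; the gate and qubit count are arbitrary illustrative choices). A classical simulator has to store all 2^n complex amplitudes, so memory and time grow exponentially with the number of qubits:

    import numpy as np

    def apply_single_qubit_gate(state, gate, target, n_qubits):
        # The state vector holds 2**n_qubits complex amplitudes -- this is
        # exactly the exponential cost of simulating a quantum computer classically.
        psi = state.reshape([2] * n_qubits)
        psi = np.tensordot(gate, psi, axes=([1], [target]))  # act on the target qubit
        psi = np.moveaxis(psi, 0, target)                     # restore the axis order
        return psi.reshape(-1)

    n = 20                                        # 2**20 amplitudes is ~16 MB; ~50 qubits would need petabytes
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0                                # start in |00...0>
    for q in range(n):                            # put every qubit into superposition
        state = apply_single_qubit_gate(state, H, q, n)
    print(abs(state[0])**2)                       # each basis state now has probability 2**-n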

Note also that at a purely practical level, since quantum computers can do a lot of things better than classical computers and our certainty about their strength is much lower, trying to run an AI on a quantum computer is a really bad idea if you take the threat of AI going FOOM seriously.

Comment author: Luke_A_Somers 29 May 2013 03:57:49PM 1 point [-]

Even if the Mersenne twister isn't good enough, you could still get a quantum noise generator hooked up. And that's basically a classical device, certainly doesn't need any coherence.
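For what it's worth, swapping the deterministic generator for an entropy-backed source is usually a one-line change; a small Python illustration, using the operating system's entropy pool as a stand-in for whatever hardware noise source you hook up:

    import random, secrets

    mt = random.Random(42)          # Mersenne Twister: fully determined by the seed
    hw = secrets.SystemRandom()     # backed by the OS entropy pool (os.urandom)

    print([mt.randint(0, 9) for _ in range(5)])   # identical on every run
    print([hw.randint(0, 9) for _ in range(5)])   # different on every run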

Comment author: Luke_A_Somers 29 May 2013 03:56:05PM 2 points [-]

Note for the downvoters of the above: I suspect you're downvoting because you think a complete hardware replacement of neurons would result in long-term adaptability. This is so, but is not what was mentioned here - replacing each neuron with a momentarily equivalent chip that does not have the ability to grow new synaptic connections would provide consciousness but would run into long-term problems as described.

Comment author: Nisan 29 May 2013 03:50:25PM 4 points [-]

Skimming the article you linked, it looks like Penrose believes human mathematical intuition comes from quantum-gravitational effects. So on Penrose's view it might be possible that AGI requires a quantum-gravitational hypercomputer, not just a quantum computer.

Comment author: Luke_A_Somers 29 May 2013 03:50:20PM 4 points [-]

We have natural intelligence made of meat, processing by ion currents in liquid. Ion currents in liquid have an extremely short decoherence time, way too short to compute with.

Are you arguing with students of Deepak Chopra?

Comment author: Creutzer 29 May 2013 03:49:42PM *  0 points [-]

Finally, how do I speak intelligently on the Contextualist v.s Invariantist problem? I can see in basic that it is an empirical problem and therefore not part of abstract philosophy, but that isn't the same thing as having an answer. It would be good to know where to look up enough neuroscience to at least make an intelligent contribution to the discussion.

Invariantism, in my opinion, is rooted precisely in the failure to recognize that this is an empirical and ultimately linguistic question. I'm not sure how neuroscience would enter into it, actually. Once you recognize that it's an empirical issue, it becomes obvious that the usage of various epistemological terms - like that of most other terms - is highly context-dependent. (If you don't think this is obvious, have a look at experimental philosophy.) With that usage, you have an actual explanandum, and if you want a theory that derives the associated phenomena - well, do linguistics and cognitive psychology and stop calling it philosophy, because it isn't. (Of course, the problem is ridiculously hard, because nobody has a good model of how lexical meaning relates to or even depends on context, even though it obviously does.)

Note: The "you" in this comment are intended generically, not referring particularly to the OP or any reader.

Comment author: DSherron 29 May 2013 03:38:11PM 0 points [-]

For 1) the answer is basically to figure out what bets you're willing to make. You don't know anything, for strong definitions of know. Absolutely nothing, not one single thing, and there is no possible way to prove anything without already knowing something. But here's the catch; beliefs are probabilities. You can say "I don't know that I'm not going to be burned at the stake for writing on Less Wrong" while also saying "but I probably won't be". You have to make a decision; choose your priors. You can pick ones at random, or you can pick ones that seem like they work to accomplish your real goals in the real world; I can't technically fault you for priors, but then again justification to other humans isn't really the point. I'm not sure how exactly Coherentists think they can arrive at any beliefs whatsoever without taking some arbitrary ones to start with, and I'm not sure how anyone thinks that any beliefs are "self-evident". You can choose whatever priors you want, I guess, but if you choose any really weird ones let me know, because I'd like to make some bets with you... We live in a low-entropy universe; simple explanations exist. You can dispute how I know that, but if you truly believed any differently then you should be making bets left and right and winning against anyone who thought something silly like that a coin would stay 50/50 just because it usually does. Basically, you can't argue anything to an ideal philosopher of perfect emptiness, any more than you can argue anything to a rock. If you refuse to accept anything, then you can go do whatever you want (or perhaps you can't, since you don't know what you want), and I'll get on with the whole living thing over here. You should read "The Simple Truth"; it's a nice exploration of some of these ideas. You can't justify knowledge, at all, and there's no difference between claiming an arbitrary set of axioms and an arbitrary set of starting beliefs (they are literally the same thing), but you can still count sheep, if you really want to. 2) is mostly contained in 1), I think.

3) Why do you need empirical evidence? What could that possibly show you? I guess you could theoretically get a bunch of Contextualists and Invariantists together and show that most of them think that "know" has a fundamental meaning, but that's only evidence that those people are silly. Words are not special. To draw from your lower comment to me, "a trout is a type of fish" is not fundamentally true, linguistically or otherwise. It is true when you, as an English speaker, say it in an English forum, read by English speakers. Is "Фольре є омдни з дівви риб" a linguistic truth? That's (probably) the same sentence in a language picked at random off Google Translate. So, is it true? Answer before you continue reading. Actually, I lied. That sentence is gibberish; I moved the letters around. A native speaker of that language would have told you it was clearly not true. But you had no idea whether it was or wasn't; you don't speak that language, and for that matter neither do I. I could have just written profanity for all I know. But the meanings are not fundamental to the little squiggles on your computer screen; they are in your mind. Words are just mental paintbrush handles, and with them we can draw pictures in each other's minds, similar to those in our own. If you knew that I had had some kind of neurological malfunction such that I associated the word "trout" with a mental image of a moderately sized land-bound mammal, and I said "a trout is a type of fish", you would know that I was wrong (and possibly confused about what fish were). If you told me "a trout is a type of fish", without clarifying that your idea of trout was different from mine, you'd be lying. Words do not have meanings; they are simply convenient mental handles to paint broad pictures in each other's minds. "Know" is exactly the same way. There is no true, really real more real than that other one meaning of "know", just the broad pictures that the word can paint in minds. The only reason anyone argues over definitions is to sneak in underhanded connotations (or, potentially, to demand that they not be brought in). There is no argument. Whatever the Contextualists want to mean by "know" can be called "to flozzlebait", and whatever the Invariantists want to mean by it can be called "to mankieinate". There, now that they both understand each other, they can resolve their argument... If there ever even was one (which I doubt).

Comment author: Viliam_Bur 29 May 2013 03:21:44PM 6 points [-]

To me it seems straightforward: Intelligence is magical. Classical computers are not magical. Quantum computing is magical. Therefore we need quantum computing for AI.

However, if after a few years quantum computing becomes non-magical, it will become obvious that we need something else.

Comment author: David_Gerard 29 May 2013 03:11:06PM 6 points [-]

It's the quantum syllogism:

  1. I don't understand quantum.
  2. I don't understand consciousness.
  3. Therefore, consciousness involves quantum.

(1. need not apply e.g. if you are Roger Penrose, but it's still logically fallacious.)

Comment author: shminux 29 May 2013 03:08:24PM 1 point [-]

No serious neurologists actually consider quantum effects inside microtubules or arrangements of phosphorylation on microtubules or whatever important for neuron function.

Actually, protein phosphorylation (like many other biochemical and biophysical processes, such as ion channel gating) is based on quantum tunneling. It may well be irrelevant, as the timing of the process can probably be simulated well enough with pseudo-random numbers, but on the off-chance that "true randomness" is required, a purely classical approach might be inadequate.
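As for simulating the timing with pseudo-random numbers: the standard trick is to draw the waiting time to the next stochastic event from an exponential distribution with the measured rate. A tiny Python sketch (the rate constant here is made up purely for illustration):

    import random

    rng = random.Random(0)     # an ordinary pseudo-random generator is enough here
    rate = 50.0                # illustrative rate: events per second

    def next_event_time(rate, rng):
        # Exponential waiting time of a memoryless (Poisson) process --
        # the usual way to simulate when the next stochastic reaction fires.
        return rng.expovariate(rate)

    print([next_event_time(rate, rng) for _ in range(5)])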

Comment author: diegocaleiro 29 May 2013 03:00:39PM 0 points [-]

You are a mean and chaotic evil commenter 95% of the time. I do love it when you do the opposite... or, well, at least, something sarcastic and fun.

Yes, indeed.

Comment author: Viliam_Bur 29 May 2013 02:53:07PM 1 point [-]

On the other hand, a well-planned divorce could be an important part of a financial plan. "Early Retirement Extreme Extreme" -- get retired within a week! :D

Comment author: Viliam_Bur 29 May 2013 02:50:09PM 1 point [-]

Could you give me examples of "self-evident truths" other than mathematical equations or tautologies? To me it seems that if you are allowed to use only things that are true in all possible universes, you can only get to conclusions that are true in all possible universes. (In other words, there is no way I could ever believe "my name is Viliam" using only the Strong Foundationalist methods.)

Comment author: DSherron 29 May 2013 02:41:48PM 0 points [-]

You're right, my statement was far too strong, and I hereby retract it. Instead, I claim that philosophy which is not firmly grounded in the real world such that it effectively becomes another discipline is worthless. A philosophy book is unlikely to contain very much of value, but a cognitive science book which touches on ideas from philosophy is more valuable than one which doesn't. The problem is that most philosophy is just attempts to argue for things that sound nice, logically, with not a care for their actual value. Philosophy is not entirely worthless, since it forms the backbone of rationality, but the problem is the useful parts are almost all settled questions (and the ones that aren't are effectively the grounds of science, not abstract discussion). We already know how to form beliefs that work in the real world, justified by the fact that they work in the real world. We already know how to get to the most basic form of rationality from whence we can then use the tools recursively to improve them. We know how to integrate new science into our belief structure. The major thing which has traditionally been a philosophical question which we still don't have an answer to, namely morality, is fundamentally reduced to an empirical question: what do humans in fact value? We already know that morality as we generally imagine it is fundamentally a flawed concept, since there are no moral laws which bind us from the outside, but just the fact that we value some things that aren't just us and our tribe. The field is effectively empty of useful open questions (the justification of priors is one of the few relevant ones remaining, but it's also one which doesn't help us in real life much).

Basically, whether philosophers dispute something is essentially uncorrelated with whether there is a clear answer on it or not. If you want to know truth, don't talk to a philosopher. If you pick your beliefs based on strength of human arguments, you're going to believe whatever the most persuasive person believes, and there's only weak evidence that that should correlate with truth. Sure, philosophy feeds into rationality and cog-sci and mathematics, but if you want to figure out which parts do so in a useful way, go study those fields. The problem with philosophy as a field is not the questions it asks but the way it answers them; there is no force that drives philosophers to accept correct arguments that they don't like, so they all believe whatever they want to believe (and everyone says that's ok). I mean, anti-reductionism? Epiphenomenalism? This stuff is maybe a little better than religious nonsense, but it still deserves to be laughed at, not taken as a serious opponent. My problem is not the fundamentals of the field, but the way it exists in the real world.

Comment author: CellBioGuy 29 May 2013 02:40:26PM 7 points [-]

No serious neurologists actually consider quantum effects inside microtubules or arrangements of phosphorylation on microtubules or whatever important for neuron function. They're all either physicists who don't understand the biology or computer scientists who don't understand the biology. Nothing happens in neural activity or long-term potentiation or other processes that cannot be accounted for by chemical processes, even if we don't understand exactly the how of some of them. The open questions are mostly exactly how neurons are able to change their excitability and structure over time and how they manage to communicate in large-scale systems.

Comment author: shminux 29 May 2013 02:32:06PM 2 points [-]

It's not possible to discuss "the amount of computations required" without specifying a model of computation.

I agree, there are more steps in between "AI is hard" and "we need QC".

However, from what I understand, those who say "QC is required for AI" just use this "argument" (e.g. "AI is at least as hard as code breaking") as an excuse to avoid thinking about AI, not as a thoughtful conclusion from analyzing available data.

Comment author: OrphanWilde 29 May 2013 02:01:05PM 0 points [-]

I think there's an important distinction to be drawn between human-level AI and human-like AI, as far as the "quantum mind" hypothesis and its relationship to quantum computing goes. It could be a necessary ingredient to consciousness while being unimportant for intelligence more generally.

Comment author: GeraldMonroe 29 May 2013 01:40:45PM -2 points [-]

An optimal de novo AI, sure. Keep in mind that human beings have to design this thing, and so the first version will be very far from optimal. I think it's a plausible guess to say that it will need on the order of the same hardware requirements as an efficient whole brain emulator.

And this assumption shows why all the promises made by past AI researchers have so far failed: we are still a factor of 10,000 or so away from meeting the hardware requirements, even using supercomputers.

Comment author: BerryPick6 29 May 2013 01:36:19PM 0 points [-]

Here is one hand...

Comment author: gothgirl420666 29 May 2013 01:23:12PM *  2 points [-]

Oh, I do this.

The name is kind of misleading... I thought post meant "after" in this context, and I was excited to see what this new exciting form of narcissism was.

Comment author: Desrtopa 29 May 2013 01:21:14PM 2 points [-]

You might try reading Yvain's summary of Reaction. I can't guarantee it's the single most accurate description of the philosophy in existence, but it's probably the clearest.

Comment author: OrphanWilde 29 May 2013 01:18:28PM 0 points [-]

I'm curious to know how you expect Bayesian updates to work in a universe in which empiricism doesn't hold. (I'm not denying it's possible, I just can't figure out what information you could actually maintain about the universe.)

Comment author: Baughn 29 May 2013 01:15:18PM 0 points [-]

It's feeling enjoyment from things I dislike, and failing to pursue goals I do share. It has little value in my eyes.

Comment author: Carinthium 29 May 2013 12:49:42PM 1 point [-]

I'm doing a philosophy degree for two reasons. The first is that I enjoy philosophy (and a philosophy degree gives me plenty of opportunities to discuss it with others). The second is that Philosophy is my best prospect of getting the marks I need to get into a Law course. Both of these are fundamentally pragmatic.

1: Any Coherentist system could be remade as a Weak Foundationalist system, but the Weak Foundationalist would be asked why they give their starting axioms special privileges (hence both sides of my discussion have dissed on them massively).

The Coherentists in the argument have gone to great pains to say that "consistency" and "coherence" are different things - their idea of coherence is complicated, but basically involves judging any belief by how well interconnected it is with other beliefs. The Foundationalists have said that although they ultimately resort to axioms, those axioms are self-evident axioms that any system must accept.

2: Could you clarify this point please? Superficially it seems contradictory (as it is a principle that cannot be demonstrated empirically itself), but I'm presumably missing something.

3: About the basic philosophy of language I agree. What I need here is empirical evidence to show that this applies specifically to the Contextualist vs. Invariantist question.

Comment author: Carinthium 29 May 2013 12:44:12PM 0 points [-]

Not quite - I had several questions, and you're somewhat misinterpreting the one you've been discussing. I'll try and clarify it for you. There are two sides in the argument, the Foundationalists (mostly skeptics) and the Coherentists. So far I've been Foundationalist but not committed on skepticism. Logically of course there is no reason to assume that one or the other is the only possible position, but it makes a good heuristic for a quick summary of what's been covered so far.

-The Foundationalists in this particular argument are Strong Foundationalists (weak Foundationalism got thrown out at the beginning), who contend that you can only rationally believe something if you can justify it based on self-evident truths (in the sense that they must be true in any possible universe) or if you can infer it from such truths.

-The Coherentists in this particular argument contend basically that all beliefs are ultimately justified by reference to each other. This is circular, and yet justified.

-The Foundationalists have put forward the contention that probability is OFF THE TABLE. This is because it is impossible to create a concept of probability that is not simply a subjective feeling that does not rest on the presumption that empirical evidence is valid (which they dispute). This gets back to their argument that it is IRRATIONAL to believe in the existence of the world.

-The Coherentists countered with the concept of "tenability"- believing X provisionally but willing to discard it should new evidence come along.

-I have already, arguing close to the Foundationalist side, pointed out that just because humans DO reason in a certain way in practice does not give any reason for believing it is a valid form of reasoning.

-Both sides have agreed that purely circular arguments are off the table. Hence, both the Foundationalists and the Coherentists have agreed not to use any reference to actual human behaviour to justify one theory over the other.
