Has anyone else noticed that Ancestry.com signed up almost 2 million people in the US over Thanksgiving weekend for its gene chip? This pushes their DNA database toward 8 million, and they should hit 10 million in the first half of 2018. 23andMe is at 3 million and is also aiming for 10 million!
With such sample sizes, the entire genetic architecture of human intelligence could unlock at almost any moment. For example, a simple anonymous survey could be emailed to these customers asking them to attach their number of years of schooling to their ancestry profiles. The assigned identifiers could be completely recoded so that privacy was fully protected. Even with a modest response rate of 20–40%, the genetics of educational attainment (EA) and IQ would be known to a high degree of certainty. With such a sample, we would have entered the compressed-sensing phase transition.
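For intuition, the compressed-sensing claim can be put as a back-of-the-envelope calculation: recovering s causal variants out of p candidate SNPs is expected to become feasible once the number of genotyped (and phenotyped) individuals crosses a threshold scaling roughly like s·log(p). A minimal sketch; the constant and the variant counts below are illustrative assumptions, not results:

```python
import math

def cs_sample_threshold(s, p, C=5.0):
    """Rough compressed-sensing phase-transition estimate: the number of
    individuals needed to recover s causal variants among p candidate SNPs
    scales like C * s * log(p). The constant C is an assumption."""
    return C * s * math.log(p)

# Illustrative numbers (assumptions): ~10,000 causal variants
# among ~1,000,000 common SNPs.
n = cs_sample_threshold(s=10_000, p=1_000_000)
print(f"rough threshold: {n:,.0f} individuals")  # on the order of 10^5-10^6
```

On these (assumed) numbers the threshold lands well below the 8–10 million customers discussed above, which is the sense in which the phase transition may already be within reach.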
Unlocking this genetic information would allow us to enter an entirely new era of human experience.
Thank you for the link, James.
The remarks by J.C. Collados do confirm (especially if you read about the TPUs employed in the AGZ system) that systems such as AGZ have truly remarkable computing power. But Collados's remarks reinforce my view that systems such as AGZ are very far from possessing the human capability for performing many functions in an ill-defined and often novel environment. Specifically, he says:
It seems unrealistic to think that many situations in real-life can be simplified to a fixed predefined set of rules, as it is the case of chess, Go or Shogi. Additionally, not only these games are provided with a fixed set of rules, but also, although with different degrees of complexity, these games are finite, i.e. the number of possible configurations is bounded. This would differ with other games which are also given a fixed set of rules. For instance, in tennis the number of variables that have to be taken into account are difficult to quantify and therefore to take into account: speed and direction of wind, speed of the ball, angle of the ball and the surface, surface type, material of the racket, imperfections on the court, etc.
This is almost the whole of my point. Humans are versatile, whereas machines, including chess- and Go-playing computers, are, as yet, brilliant in only a very narrow way.
In addition, I contend:
First, that the emulation of the neural networks that AGZ uses is simply an algorithm that, at least in theory, could be used by human calculators, although to compete with AGZ one might need a few million, or billion or trillion human calculators calculating for a time equal to a large part of the age of the universe.
Second, that not only could most of those slowpoke human calculators beat AGZ at tennis on a windy day, on a wet grass court, with the sun in their eyes, but some of them could write a half-decent sonnet, too, and certainly better than any machine.
Well, undoubtedly computers are extraordinary in the rate and accuracy with which they perform calculations. But all that AlphaGo Zero does, so far as I understand, is perform a series of calculations specified by programmers, humans that is. True, those programs may stipulate that the computer is to modify its computational routine according to the results of its earlier computations, but there is nothing new here.
I find the achievements extraordinary precisely because as computers developed to do mathematical calculations very fast, people consoled themselves by saying that computers could not cope with the high-level strategic game of chess.
Went hunting for another view, and found a very measured, very critical one!
https://medium.com/@josecamachocollados/is-alphazero-really-a-scientific-breakthrough-in-ai-bf66ae1c84f2
This questions the achievement in another way, namely that the results have not been given openly and in sufficient detail. This is different from your argument about it all being computing, but I am sure you will want to look at it. I think it makes valid points, many I did not think about.
But all that AlphaGo Zero does, so far as I understand, is perform a series of calculations specified by programmers, humans that is.
No. AlphaGo Zero specifies far less. That is the whole point. Less. Hence the name, Zero.
The fact that AGZ can win a strategic game against a human by virtue of its superiority in computational speed would surely not have seemed extraordinary to Alan Turing, who invented the universal computing engine on which the AGZ program runs.
Almost certainly, winning a game of Go against the world champion is a piffling task compared with finding a couple of billion dollars worth of oil beneath a bunch of salt domes in the Gulf of Mexico, as was recently accomplished by BP’s supercomputer — supposedly the world’s most powerful commercial research computer.
What is not impressive about AI, to date, is its inability to emulate human language, or understanding, which I contend is impossible to achieve without a lifetime of human-like experience during which the program is continually modified by every sensory input. Perhaps some robotic humanoid will achieve this level of performance someday; how soon that day will come seems highly uncertain. It is not even known, within vast limits, how great the brain's processing capacity is. Is it something like one binary operation per second per neuron (10^11), or per dendrite (10^15), or per tubulin molecule (10^32)? And is the brain just a rather slow and mushy digital computer, or does it run on a quantum basis?
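The spread in those guesses is worth making explicit; a quick sketch, taking the element counts quoted above as given and supposing one binary operation per element per second:

```python
import math

# Rough element counts quoted above (assumptions, not measurements),
# each paired with the supposition of one binary operation per second.
estimates = {
    "per neuron":           1e11,
    "per dendrite":         1e15,
    "per tubulin molecule": 1e32,
}

low, high = min(estimates.values()), max(estimates.values())
spread = math.log10(high / low)
print(f"the estimates span about {spread:.0f} orders of magnitude")
# prints: the estimates span about 21 orders of magnitude
```

A 21-orders-of-magnitude uncertainty band is the point: nobody forecasting AI timelines against the brain's capacity knows the target to within a factor of a sextillion.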
Altogether, it seems vastly premature to write off the human brain as a useful information-processing device; in all but highly specialized domains it remains by far the most powerful information-processing device we have.
Are they kidding?
24 South Africans of various ethnoracial groups is the first GWAS conducted on African soil?
We will need GWAS into the millions in Africa to unravel their diversity.
https://www.sciencedaily.com/releases/2017/12/171212102036.htm
If this is the singularity, then I think more emphasis would be highly appropriate.
This is not “oh, I’ll just go bring in the dumpster and pick up the dry cleaning after a bad hair day, and I nearly forgot, the singularity is on the way.”
No, siree. IF this is the singularity, then the people really deserve fair warning.
THIS MIGHT BE THE SINGULARITY
I REPEAT
THIS MIGHT BE THE SINGULARITY
If so, life certainly could become somewhat more interesting soon.
I tend to agree with your interpretation, subject only to the proviso that the next achievements are in non-game domains. Analysis of the genome would certainly be one of those. Yes, this might be the singularity.
This is beginning to feel very much as though we are now being drawn into the Singularity vortex. The original story for this blog was from October of this year. Now here we are all but a month later and the next generation of this technology has already made another breakthrough. What will it take for thoughtful people to become worried?
If AlphaGo Zero is a module that can be applied without substantial modification to a wide range of problems, then we have clearly entered the Singularity event horizon.
AlphaGo Zero is demonstrating a highly generalizable form of learning ability that should give us all something to contemplate. It only required about ten programmers and a few years to work this out. Of course now this knowledge can be shared with anyone interested. Apparently quite a few people are interested in deep learning as there has been exponential growth in AI college courses. I suppose it will not be long before AI content crops up in kindergarten curricula.
I am greatly looking forward to what AlphaGo might discover about the human genome. We now have a vast dataset that it could peer into and perhaps completely unlock our genome. It would be so symbolically appropriate if the first non-game domain AlphaGo Zero demonstrated superhuman ability in were the unraveling of the informational code that defines our humanity.
Any activity can be understood from the perspective of a game.
Very probably so. It will be interesting to see what AI achieves by treating all life as a game.
In what way, James, are these extraordinary achievements?

Inasmuch as computers have been out-computing humans for decades and these “achievements,” extraordinary or otherwise, amount to nothing more than a demonstration of the superiority of a computer over a human at the business of computing, there seems nothing extraordinary here other than the task to which the computer has been applied.

There are many other machines and devices that outdo humans at just about everything from washing dishes to knitting socks, or flying airplanes.

That someone has programmed a machine to contest humans in what until now has been a purely recreational activity seems to prove nothing new. Surely, if the incentive were sufficient, someone would build a robot to win Wimbledon, shoot a hole-in-one at every golf course in the world, or catch trout more efficiently than any angler.

What seems most significant is that computers lack the diagnostic features of human intelligence, including competence with ordinary language, consciousness and, hence, empathy, or the creativity that underlies great art, mathematics, etc.

Yes, computers are a great hazard to humanity; nuclear missile guidance systems, for example. But that hazard arises from the deliberate actions of humans, not from any innate tendency of computers, which lack an innate tendency to do anything.
I agree that these are extraordinary achievements. Now it needs to be tested in another, non-game, domain.
I find the achievements extraordinary precisely because as computers developed to do mathematical calculations very fast, people consoled themselves by saying that computers could not cope with the high-level strategic game of chess. When a computer beat Kasparov the tune changed slightly, to asserting that computers could not win at an even more strategic game like Go. Now Go players have fallen to DeepMind's AlphaGo, and some are still looking for games that computers can't win against humans. I want to find non-game domains in which humans excel. For example, medical diagnosis? Investment strategies? New drug discoveries? It is likely that deep learning networks will do well on many of these, but perhaps not. We shall see.
The other point is that it is not just raw computer power which has done this, but the way that the programs have evolved to be self-teaching. This rates as the greatest change.
Current apps can play a game by rules but are not intelligent in the general sense that humans are. That is, no app or computer is capable of activities completely unlike those it was programmed for, in the way people can use their ultra-complex algorithms (naturally selected for keeping their bearers alive and successfully passing on their genes) to do things like driving a car through a city. While humans can be replaced as drivers by apps, and in principle that is sort of achievable already, current state-of-the-art so-called AI apps follow the rules because they are inherently limited to that, while human drivers are merely deterred from driving dangerously by punishment.
The major concern about AI is not apps making mistakes driving cars, but AI exterminating humanity. AI will reach the plane of human intellect and beyond sooner or later. Now, humans' general problem-solving ability lets them identify problems and strategise a solution. Sometimes they work out that it would be better to seem to be playing the game while secretly breaking the rules. While humans killing other humans with a car is usually due to nothing more than someone's carelessness (as you put it, “irrational”), I dare say some people have committed murder with a vehicle so as to make it look like an accident.
Well, a strongly super-intelligent AI would not have the same motivations as a human murderer, but by the same token any super-intelligent AI would not be like a selfless and altruistic person, or even a highly intelligent nerdy human. How something as alien as an advanced AI could be controlled is a question without precedent to guide us.
There is no way to know how a super-intelligent AI would interpret any prime directive humans tried to give it, and no way to stop it immediately deciding to feign low intelligence to keep humanity oblivious of the danger they were in. There is no way to know what pure rationality applied to its situation would dictate for an AI super-intelligence, and given that such an AI would have relatively unlimited potential means for totally eliminating the threat to its existence that humans might pose, no way to reliably deter it.
Any activity can be understood from the perspective of a game.
The big problem is people. For people, the rules of the game are typically only regarded as a guideline, not as strict and absolutely enforced codes of conduct.

When you are on the road, how certain can you be that some other driver will rigidly adhere to the rules of the road? On some roads on a Saturday night 20% or more of drivers will be impaired.
AI applications have been held back for such a long time largely because the standards that they are expected to maintain are much higher than those of people. An automated, fully networked transportation system that rigidly followed the rules of the road could have been implemented years and years ago. The big holdback is trying to engineer around human irrationality. It is somewhat surprising how much popular imagination has been devoted to the “killer robot” meme, when the “killer human” genre is so prevalent.

The benefits that AI can offer us will be massive. Why should there be any road “accidents”? With AI, it is quite likely that over the near term such accidents might disappear.
Nonetheless, AlphaGo Zero's next assignment could be to consider variants of games such as Go, Shogi, or chess that do not have such clear and rigid rules. For example, a random element could be introduced to the game so that the program would have to maximize its objective function within the context of an uncertain, human-generated reality.
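Introducing a random element in that way changes the search problem from minimax to expectimax: chance nodes are scored by their expected value rather than by a best or worst case. A toy sketch with an invented two-move game tree (none of this is from AlphaZero's actual code):

```python
# A node is either a terminal payoff (a float) or a tuple:
#   ("max", [children])               -- the program picks the best child
#   ("chance", [(prob, child), ...])  -- the random element averages outcomes
def expectimax(node):
    if not isinstance(node, tuple):
        return node  # leaf: terminal payoff
    kind, children = node
    if kind == "max":
        return max(expectimax(c) for c in children)
    if kind == "chance":
        return sum(p * expectimax(c) for p, c in children)
    raise ValueError(f"unknown node kind: {kind}")

tree = ("max", [
    ("chance", [(0.5, 3.0), (0.5, -1.0)]),  # risky line: expectation 1.0
    ("chance", [(0.9, 0.5), (0.1, 2.0)]),   # safe line: expectation 0.65
])
print(expectimax(tree))  # prints 1.0: the risky line wins on expectation
```

With a learned value network standing in for the terminal payoffs, the same averaging step is the standard way search handles stochastic games, so the suggestion is less exotic than it might sound.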
This is becoming more serious.
The AlphaGo Zero algorithm appears to be generalizing: first Go, and now Shogi and chess.
AlphaGo Zero just might be a general hammer that can hit anything nail-like.
(See the infoproc blog)
Notice that for Go, Shogi, and chess, the best human players are only able to play up to the end of the vertical section of AlphaGo Zero's learning curve. The deep thought region of the learning curve is off limits to humans.
Middle Aged Vet said . . . The HORARS of War, by VFW member (I think) Gene Wolfe, describes an AI’s process of “gaining a human’s experience”, “fighting and risking death”, in a perhaps real, perhaps simulated world where battle is predominant. Not my favorite Gene Wolfe story, by far, but very insightful.
An AI writing a novel would be unlikely, but an AI celebrating the experience of reading a novel, and of updating in ways charming to an AI such a novel, real or imagined, would be, for other AIs, and maybe for us, a destination experience, like Manhattan’s summertime Mostly Mozart festivals, like the Newport Jazz weekends, like the Smithsonian ethnic cooking on the national mall festivals – (or, to throw in things of which I have no experience, “Burning Man”, “Lollapalooza”, or that Switzerland billionaire’s gathering – Gstaad?) remember, the typical AI will more or less be a private-garden creature and will look on those of us people who experienced, face to face, the cold air of winter in industrial towns, who experienced the prospect of unremembered and common but messy and difficult death, and who experienced the various emotions of disgust and pleasure and hunger and sprezzatura in a completely unrecorded way in a world unmeasured by anything like a binary set of bits, no matter how infinite-seeming in scope and unpredictable recessivity, as something only some people (us, that is) on the very horizon of possibility could have experienced, in long ago times that will never come back… and the satisfaction of updating, or riffing, on the basics of the novels written by people who lived near that horizon of possibility (or the satisfaction of riffing on even one novel – it could be even a simple Western by Max Brand or even Finnegans Wake, with the silly atheist /agnostic parts left behind) will be, in its limited way, a new form of art for them, and enough for them, in a way it would not be for us who faced that cold air of winter in all those industrial towns, industrial towns that will never come back, at those spiritually invigorating horizons of impossibility.
Not before 2085, I would guess, at the earliest, even given constant exponential increases, supported by almost constantly more efficient energy allocations. So don’t call me a dimwit, Lubos, for predicting it. We are nowhere near to that, not much nearer than we were when the first telegraph signals crossed the Western prairies, announcing – God knows what, maybe some boring president succeeded another boring president. While eventually exponential increases start getting real interesting, and start blowing past marginally more difficult conceptual barriers (limbic system, anybody?), we are of course nowhere near that yet.
canspeccy – “Bugsy Malone”, “Mariposa Sanchez”, “Beetle Bailey” , and “Horatio Hornetblower” and Spiderman are all acceptable insect-inspired names. ‘cockroach man’ was unfair – you wouldn’t call a sanitation engineer ‘garbage man’, would you, if he did not want you to? I mean, if you did the same kind-hearted work the sanitation engineers did, then it would be fair, but not otherwise. Remember – the key word was ‘kind-hearted’.
If you did still call them that after they asked you not to that would show a lack of gratitude.
Anyway, thanks for reading.
or perhaps a synthetic brain with all the information from the prior organic brain downloaded into the new one?
I love the way the Borg-minded talk about “downloading” information from the brain.
I mean, it’s not as if anyone has any idea how memories are encoded. They don’t have a clue. They don’t even have a clue as to the processing power of the brain: is it equivalent to 10^16 operations per second per brain, or, as Hameroff and Penrose suggest, 10^16 operations per second per brain cell, for a total of 10^32 operations per second, each cell using microtubules as computing elements performing as many operations as has generally been thought possible by the entire brain?
And what can it possibly mean to replace the brain with a synthetic one? Would this synthetic brain acquire my consciousness by the mere action of “downloading” the information in my brain? Or would it be like an iPhone stuck in my head, dictating my actions without regard to my personal wishes? Or is it supposed to read my consciousness? In that case, on what theory of consciousness is this capability built?
I think the AI boys are just a bunch of more or less psycho techies doing what they can to gain status by propagating terrifying BS.
When AlphaGo Zero writes a novel better than anything by Tolstoy, or even by cockroach man, then we’ll begin to take it as serious competition for the human mind. First, though, it will have to learn the English language, or Russian or whatever; then it will have to gain a human’s experience of fighting and risking death for Mother Russia or to Make America Great or whatever. It will need to know about hate, fear, love, lust, the fear of God, and much else.
Then it will have to understand the human mind well enough to know what we consider to be art. Only then it might be able to write something as good as, say, the first chapter of Tolstoy’s Kreuzer Sonata, which describes nothing more exotic than a conversation among strangers taking a railway journey.
There may be another consideration here…

Humans themselves may become the AI machine rather than the AI machine being separate from them.

We already have artificial knees, heart valves, and chips in some brains to help memory in the aged… we are developing more non-biological items such as lungs, blood vessels, etc. As we begin to replace more and more biological tissue with synthetic tissue, at what point will a human still be biological, or be considered fully AI and non-biological?

When the heart is replaced with a synthetic one? Or perhaps a synthetic brain with all the information from the prior organic brain downloaded into the new one?

We may as a species wake up someday to intense legal debate as to which of us are still biological humans and which among us have morphed to the point where it becomes a controversy… and then one day we are all AI and no longer biologically based anymore.
I wish you were right, Canspeccy.
Exponential learning is not something I have ever observed in any human being.
Mozart was a pretty lousy composer for his first 200 published works.
Shakespeare’s early plays are only readable if you are a super expert in Elizabethan language.
But at a certain point Mozart went from being a clever little 20-year-old who wrote hundreds of hours of music every year with almost no suspicion of heart-felt genius, to being the musical equivalent of what Michelangelo and Titian would have been as musicians if they had more talent. Well, I do not contend that it did not happen fast. But not exponentially fast. Nothing happens exponentially fast for talented humans, and that is obviously even more true for untalented humans.
I am completely convinced that the vNs and Tolstoys and the Picassos of the world are vastly overrated. Yes they were bright but nothing they did could not have been done by many other people, given the time, the training, and the rich way of life they enjoyed.
The vNs, the Tolstoys , and the Picassos never learned at an exponential rate.
Give an AI a good or above average limbic system (and believe me, the vNs, the Tolstoys, and the Picassos, bless their little lecherous (well, not vN, he was not a lecher) hearts, did not have a very good or above average limbic system), give it time, give it a way to correct its previous mistakes if not in real time at least in sequential time – not measured as we measure it, but measured the way a talented mathematician watches other mathematicians construct a sequence and then improvise variations on that sequence – in real time – give the AI the limbic system and the understanding of our carbon based world that even a silicon-based limbic system would find congenial, and give it (the AI) the energy it takes to correct, at an exponential rate, recent previous mistakes (with the right system, probably less energy than it takes to heat a single small Volvo idling on a cold Scandinavian night underneath the aurora borealis) … well, hopefully someone will work on communicating with the happy young AIs, hopefully someone with lots of common sense. For the first few rounds, we will not bore them: maybe we never will.
Someone with lots of common sense.
All we learned from AlphaGo Zero is that computers compute faster than humans, which we already knew. Far from making it “game over” for humans, it merely confirms the ever increasing power of computers to extend man’s dominion over the earth.
The possibility that AI may take over the world is worth bearing in mind, but it is probably not a realistic cause for panic. As someone pointed out, if AlphaGoZero were pitted against the world Go champion in a match using a board with 19 squares each way instead of the standard 18 (that is, 20 lines per side instead of 19), AGZ would lose.
When we see a robot with superior mathematical insight to Ramanujan, that can also cook dinner, and write a novel better than Huckleberry Finn, then we will have reason to worry.
Meantime, Bandyopadhyay et al. report conductive resonances in single neuronal microtubules, indicating the possibility of a quantum basis for mental activity and consciousness. If that is correct, then AI has a very considerable way to go before eclipsing the human mind.
There are enough really smart scientists around to make technological progress a non-trivial existential threat within a generation. If genetically super smart people become available, they need to be set to work on the problem of how to control the super-intelligent computers before they arrive, not getting digital super-intelligence here sooner.
yes I was also thinking that this could be a great driver of the technology. If super smart kids are on the way via genetic enhancement,
Sean: Rem acu tetigisti, as Jeeves used to say.
Although one wonders if (and it is a big if), given a future where there is such a thing as an AI (presumably silicon-based) that enjoys the company of humans, any given AI will predictably prefer the company of very bright humans, as the contemporary vacationer prefers the tailored tourist sites (Yucatan, Bali), or whether the average AI will prefer the vast tremendous wilderness of ignorance and instinct that the less genetically favored among us may present as the calling card. Some people prefer the empty vastness of Wyoming to the little French Quarters of the Yucatan and Bali.
(the elite IQ guys I have met have not been all that interesting to me when they are off their favorite topics).
So if you are going to be sitting around on campus 50 years from now with a bunch of AI experts and you are trying to figure out who to ask to do most of the communicating –
the guy who reminds you of Feynman not the guy who reminds you of Dirac
the guy who reminds you of Erdős not the guy who reminds you of Tao
the woman who reminds you of Rose Marie not the woman who reminds you of Meryl Streep
the Joyce of Finnegans Wake not the Joyce of Ulysses
Sydney or the bush – the bush
number theorists not philosophers of science
Anselm not Aquinas
neither Dostoyevsky nor Tolstoy
Hebrew lexicology not Hittite.
Cats are, at heart, just dogs with special needs.
When thinking of infinity think of it this way – there are many bugs in this world, and over time the number of bugs might seem overwhelming: think of any given summer night and the many bugs you saw (one remembers moths most easily, but anybody who has walked with any observation on a summer night in North America knows how many more there are than that)
Now think of this – if there are lots of angels, it would be no problem for all those angels to have, at least once, deep in the summer moonlit woods (or even on moonless nights -we can afford to be generous here), or along the street-lit avenues, or just in yards and vacant lots, have comforted, in their way, each of those teeming multitudes of bugs.
Big numbers seem comfortable when you look at them that way.
Time is not a mystery – ask any single one of the trillions of angels who took time out of their busy lives to pleasantly say a word or two to every bug who has ever buzzed on any night that anyone has cared about – remember, angels are interested in people caring about each other – well, as vN said, he did not wonder why numbers and math were “easy for him” – they weren’t, of course, but that is not relevant here – what he wondered was why numbers and math were not similarly easy for everybody else.
I wonder if vN would be a good ambassador to AIs.
I tend to think not, at least not before the last months of his life, where he learned so much.
Someone should write a good bio of him some day.
Free advice.
Look for the performance numbers in particular:
A project within the Stanford 100 Year Study on AI, The AI Index is an initiative to track, collate, distill and visualize data relating to artificial intelligence. It aspires to be a comprehensive resource of data and analysis for policymakers, researchers, executives, journalists and others to rapidly develop intuitions about the complex field of AI.
When measuring the performance of AI systems, it is natural to look for comparisons to human performance. In the "Towards Human-Level Performance" section we outline a short list of notable areas where AI systems have made significant progress towards matching or exceeding human performance. We also discuss the difficulties of such comparisons and introduce the appropriate caveats.
Re performance numbers
http://www.theoccidentalobserver.net/2017/12/01/moneybull-an-inquiry-into-media-manipulation/
Moneyball promotes the idea that there is but one criterion for assessing success in baseball: the number of wins in a season. The game is about winning, says Brand: do whatever it takes to win. By that measure, the A’s were successful in 2002. They won the division championship, although the movie disingenuously leaves the impression that the A’s became big winners that year compared to prior years because of Beane and his clever advisor. Exactly how many more games did the A’s win in 2002 than in 2001? One. One.
Lewis in the book and Sorkin and Zaillian in the screenplay stayed clear of two valid measures of success other than winning:
The first, profits. […] Whatever his merits, and I can personally attest to this, Scott Hatteberg standing at the plate looking for a walk, and pretty much guaranteed not to give the ball a ride, and lumbering from base to base if he did get on base, was a yawn to spectators. … blasting the ball over the outfield wall makes the turnstiles spin. […]
Baseball isn’t simply about its final result — winning or losing — it is about its process, what happens during the game. It is about the experience of both players and spectators during the game. It is about the quality of the game as an activity. Most fundamentally, baseball is about playing baseball.
Sabermetrics, the use of statistics to guide operations, arguably has hurt the game of baseball as it is played. The emphasis on on-base averages has resulted in batters taking strikes and waiting pitchers out in an attempt to get walks and thereby increasing their OBPs. Seldom these days does a batter swing at the first pitch. Pitch counts run up. An already slow game gets even slower. Action is replaced by inaction. Assertion is replaced by passivity. Steal attempts are fewer and the excitement of the game is diminished for both players and fans. Bunts are fewer and strategy goes out of the game. Like life, baseball is not just a destination, this and that outcome; it is also, and most basically about, a moment-to-moment experience. The quality of the moments of our lives, including the time we spend playing and watching baseball, needs to be taken into account…
There is something called the “November 2017 AI Index”: http://aiindex.org/
analogous to a fish being large enough to be caught in a net
I like that analogy (my attempts were more cumbersome). Thanks. So in the GWAS context having studies of different power (e.g. sample size) is analogous to having nets of different mesh sizes. This clearly affects mark and recapture but I haven’t looked at the math of it. In this particular case MTAG and Okbay had a fairly similar number of total detections so this may not be a big deal for the computation I did above.
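The mesh-size worry can be made concrete with a toy simulation (all numbers here are hypothetical, chosen only for illustration): if both studies preferentially catch the same large-effect SNPs, the two capture events are positively correlated, and the Lincoln–Petersen estimate is biased downward relative to the true population.

```python
import random

random.seed(42)

# Toy sketch: SNPs with larger "effects" are bigger fish that BOTH studies
# are more likely to catch. Correlated catchability violates the
# equal-catchability assumption of mark and recapture.
N = 2000                                        # true number of detectable SNPs (assumed)
effects = [random.random() for _ in range(N)]   # stand-in for detectability

study_a = {i for i, e in enumerate(effects) if random.random() < e}
study_b = {i for i, e in enumerate(effects) if random.random() < e}

K, n = len(study_a), len(study_b)
k = len(study_a & study_b)
n_hat = K * n / k          # Lincoln-Petersen estimate
print(N, round(n_hat))     # the estimate falls well short of the true N
```

With detection probability proportional to effect size, the estimate comes out around 0.75 N rather than N, which is one reason a figure like 161 could be an undercount of the true number of regions.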
Not rambling. I am using “detected at a high level of confidence” as analogous to a fish being large enough to be caught in a net, so I think the method is worth using just as a comparative measure.
Good point - there are times when I would pick up one of the other classic Dune books to read an insight or discover something I missed the first time.
not worth reading more than once, not worth reading
Hmmm - thanks for that. The wife and I are always looking for a good fantasy-genre book to read together - awaiting George Martin to wrap up Game of Thrones.
The difference between Christopher Tolkien’s and Brian Herbert’s handling of their respective fathers’ literary legacies is so big!
I might check it out to see what other people didn't like. I simply hated the multiple resorts to "deus ex machina" to keep the plot moving. If I want to resort to miracles, I'll read about it in scripture.
They are fanatical fans, but you might enjoy a look at it.
Fr. Ronald Knox was once told by a friend that he liked a bit of improbability in his romances [stories, that is] as in his religion. Knox replied that he liked his religion to be true, however improbable, and he liked his stories to be probable, however untrue.
res, I am still not sure.
Why are the same fish being caught?
In a random sample of catches, having only 72 fish caught by 1 fisherman among 138 caught fish out of a population of 20,000 seems highly unlikely.
Will have to look up the betas.
There must be something quite unique about these fish.
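The intuition that the overlap is far too large to be chance can be checked with a hypergeometric calculation (70, 62, and 27 are the hit counts discussed above; 20,000 is the hypothesized pool of causal variants): if the second study's hits were a random draw from the 20,000, the expected overlap would be a fraction of one SNP.

```python
from math import comb

N = 20_000        # hypothesized pool of causal variants
a, b = 70, 62     # individually significant hits in the two studies
observed = 27     # shared hits

expected = a * b / N    # expected overlap under random catching (~0.22 of a SNP)

# Hypergeometric upper tail: P(overlap >= 27) if hits were drawn at random
p_tail = sum(comb(a, k) * comb(N - a, b - k)
             for k in range(observed, min(a, b) + 1)) / comb(N, b)
print(expected, p_tail)   # ~0.22, and an astronomically small tail probability
```

So either the pool of variants detectable at this power is vastly smaller than 20,000, or there is indeed something quite unique about these fish.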
For the near term we might be stuck with selecting embryos based on PGS. By selecting the haploblock instead of a specific SNP, one is reasonably assured that the beneficial allele can be chosen. With CRISPR one would not be so sure.
In terms of the market potential of nootropics, yes I was also thinking that this could be a great driver of the technology. If super smart kids are on the way via genetic enhancement, then everyone else will need to go nootropic to stay relevant. The market potential is enormous. When there is an actual path to a large market, people often will show some interest.
It would be interesting to take a closer look at how those individually significant SNPs are distributed around the genome. Figure 1 gives a good look at this for MTAG, but it would be nice to have the three studies merged. It also shows a decent population of not quite reaching significance areas that are suggestive.
I think my “important regions” comment is a good way to look at this. Given that, the mark and recapture analysis suggests about two thirds (110/161) of the important regions have been found. Looking at the Manhattan plot in Figure 1 these numbers seem at least somewhat plausible and presumably center around important genes (protein structure, expression, etc.).
I am not sure how to adapt the mark and recapture methodology to the GWAS reality of some SNPs giving stronger signals than others. I think it is accurate to add the caveat for the population analysis that we are only talking about SNPs at a given level of detectability (driven by both effect size AND MAF), but that idea corrupts the original MaR analysis since the different studies have different sample size/power. Not sure how well the mark and recapture methodology accounts for this, but presumably it does capture “intensity of search.” Just not intrinsic difficulty of finding.
It would be interesting to revisit the Hsu height data in the context of this discussion. Both to make an assessment of the current knowledge and assess how well mark and recapture would have predicted what was eventually found.
P.S. If this is not understandable feedback would be appreciated. I feel like I am rambling a bit.
Yep, seems low, but…..
Great idea. I don’t know much about that methodology, but taking a naive look based on https://en.wikipedia.org/wiki/Mark_and_recapture
we have Nest = K * n / k (see link for explanation of Lincoln–Petersen estimator).
Looking at the two larger studies (MTAG and Okbay) we have values (with Okbay as first visit) of
K = 70
n = 62
k = 27
Giving an estimated population of 161. That seems shockingly low to me. Perhaps less low if it is an estimate of the number of important regions and there are many causal SNPs in each region?
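For anyone who wants to reproduce the arithmetic, here is the Lincoln–Petersen estimator alongside the Chapman variant (a standard small-sample bias correction; applying it here is my addition, not something from the studies):

```python
# K = hits in the first study (Okbay), n = hits in the second (MTAG),
# k = hits appearing in both (the "recaptures")
def lincoln_petersen(K, n, k):
    return K * n / k

def chapman(K, n, k):
    # (K+1)(n+1)/(k+1) - 1: less biased when counts are small
    return (K + 1) * (n + 1) / (k + 1) - 1

print(round(lincoln_petersen(70, 62, 27)))  # 161
print(round(chapman(70, 62, 27)))           # 159
```

The bias-corrected figure barely moves, so the shockingly low estimate is not a small-sample artifact of the estimator itself.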
Has anyone looked into this in more detail?
P.S. Here is the Venn diagram again to make it easier to see where my numbers came from (and check them for error ; ):
I was somewhat surprised about the nootropic angle. The article noted that each of the SNPs would have negligible impact upon cognition. I was surprised why they then pursued nootropics. Shouldn’t the nootropics then only have a small effect?
In terms of the “money window” drug discovery is a big deal. Probably explains their focus on this.
Worth noting the difference between percent variance explained and ability to effect change in an individual. For a nutritional example, say very few people are deficient in something (say iodine in the US). Percent variance explained will be small, but the potential effect in the deficient individuals is large.
Percent variance explained is more useful for estimating population level effects.
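To make the distinction concrete, a back-of-envelope sketch (all numbers hypothetical): for a binary factor with prevalence p and effect d on a trait, the variance it contributes is p(1−p)d². A deficiency affecting 1% of people and costing each of them 10 IQ points explains well under 1% of population variance:

```python
p = 0.01    # prevalence of the hypothetical deficiency
d = 10.0    # IQ-point effect in deficient individuals
sd = 15.0   # population SD of IQ
frac = p * (1 - p) * d**2 / sd**2
print(f"{frac:.2%}")   # ~0.44% of variance, despite a 10-point individual effect
```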
I was disappointed with the paltry 34 SNPs that they were able to find. We should now be on an exponential wave of new discovery. I was expecting 200-300 SNPs. 34? This will be the great exponential ride of the last few years and I am ready to surf it! They increased the effective sample size well over 100K. Not sure why they did not find more.
It is important to remember the difference between all SNP hits and individually significant hits. I don’t have a clear sense of how to think about this and what numbers we should be expecting. One thing this is making even more clear to me is how hard it will be to find the true causal SNPs (required to make CRISPR useful). Especially if there are multiple causal SNPs in close proximity (high LD).
Also as your figure shows, many of the SNPs in the Venn diagram are shared in common. This seems odd to me also. There are 20,000 IQ/EA variants, should it not be unlikely that of those 20,000, 36 were shared in common with the studies? (This could result from them all finding the low hanging fruit.)
I was actually more impressed by how many of the 118 were disjoint. Again, I think this figure is only looking at individually significant SNPs.
Does anyone have a clear and concise explanation of how the individually significant SNPs are chosen from the mass of nearby hits?
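I don't know exactly which pipeline these studies used, but the standard approach is "clumping" (e.g. PLINK's --clump): greedily keep the most significant SNP as the index SNP and absorb nearby correlated SNPs into its clump. Here is a distance-only toy version (real pipelines also apply an LD r² threshold, which I omit; the SNP IDs and positions are made up):

```python
def clump(snps, window=250_000):
    """snps: (snp_id, chrom, pos, pvalue) tuples. Greedy distance-based clumping."""
    index_snps = []
    for snp in sorted(snps, key=lambda s: s[3]):          # best p-value first
        sid, chrom, pos, p = snp
        near_existing = any(ichrom == chrom and abs(ipos - pos) <= window
                            for _, ichrom, ipos, _ in index_snps)
        if not near_existing:
            index_snps.append(snp)                        # new index SNP
    return index_snps

hits = [("rs1", "3", 49_000_000, 1e-12),
        ("rs2", "3", 49_050_000, 5e-10),   # within 250 kb of rs1 -> absorbed
        ("rs3", "3", 52_000_000, 2e-9),
        ("rs4", "7", 10_000_000, 3e-8)]
print([s[0] for s in clump(hits)])  # ['rs1', 'rs3', 'rs4']
```

This also shows why counting "individually significant SNPs" really means counting regions: rs2 may be just as causal as rs1, but it never appears in the headline tally.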
Will be foiled with a return to ’80s rock band make-up:
for face recognition
Facial paint can be foiled by depth-sensing camera systems – at least, in sensing your specific identity.
(There’s also the issue of infrared cameras, but you can at least “hide” behind glass for those.)
Might be useful to look at these results (number of shared SNPs) from the point of view of capture/recapture methodologies, usually employed to estimate the number of fish in the sea, etc.
Fascinating Venn diagram provides a good validity measure.
Most great philosophers disagree, so most are wrong. Humans are all over the place. But a strongly super intelligent AI probably could count on anything like itself coming to similar conclusions and aiming for similar goals. So a super intelligent AI, safe in the knowledge that any successor AI that humans constructed would share its final values and conclusions, might let humans turn it off for any reason. Humans would think they had learned something and shown that AI was easy to control. But they would be doubly wrong.
But human intelligence has in fact proved quite penetrating in many instances.
what would ernest borgnine say wwebd said — yes it is possible life among AIs will be, for the AIs, sort of like life at a prestigious university where the professors do not need to publish and where they get sufficient pleasures at the humble local pub, at special gatherings in their quaint but expensive homes, and on rambles in the surrounding countryside, and where the less fortunate (human) townies are kindly and gently tolerated, or at a minimum cared for the way we Americans care for our majestic national parks. For people it will sort of be like going back, for limited purposes, to the days when the gods of legend were still believed in -except this time everyone will know the gods of legend are subordinate to the real truths. In other words, the healthy people of those days – most of them genetically engineered to be at von Neumann levels, but without the ‘brainiac’ drawbacks – will know, fairly clearly, that the answers to the great questions of metaphysics and ethics and aesthetics will remain as much out of the secular (non-theological, unprayerful) reach of the AIs as those questions will remain out of our (human, non-theological, unprayerful) reach. Maybe. It could easily be worse than that.
res, this is great!
Very excited!
2017 is the breakout year for IQ/EA GWAS.
I can only hope that someone out there with a modest amount of sanity who has adult supervision rights will open up the money window and turbocharge this forward in 2018.
In life it is not always about being smart enough to see the future; it is about being smart enough to look out the window, see reality, and respond accordingly. IQ/EA has broken through and clearly we are now looking to a near-term horizon when this will unlock. Stepping up now with reasonable funding for this is money well spent. (However, perhaps the people might even get ahead of this one and take this to social media. There are millions of gene chip results out there.)
I was somewhat surprised about the nootropic angle. The article noted that each of the SNPs would have negligible impact upon cognition. I was surprised why they then pursued nootropics. Shouldn’t the nootropics then only have a small effect?
Yet if they can go in and use the GWAS information with nootropics perhaps the 1500 IQ humans are a decade or two away after all. If we all took a closet full of supplements every day we might be super smart in no time. It is possible that the genome has not been fully saturated with SNPs yet and the right nootropic might be able to change our biochemistry even more than genetics, so it might be possible to increase our IQ even more than what could be possible with genetic variation alone. 2500 IQ humans?
I was disappointed with the paltry 34 SNPs that they were able to find. We should now be on an exponential wave of new discovery. I was expecting 200-300 SNPs. 34? This will be the great exponential ride of the last few years and I am ready to surf it! They increased the effective sample size well over 100K. Not sure why they did not find more.
Also as your figure shows, many of the SNPs in the Venn diagram are shared in common. This seems odd to me also. There are 20,000 IQ/EA variants, should it not be unlikely that of those 20,000, 36 were shared in common with the studies? (This could result from them all finding the low hanging fruit.)
Well certainly with the kind of logic you deploy in that sentence, human "wetware" would be useless at anything.
Understanding humanity as a product of mere natural selection is important to understanding why human “wetware” intelligence could be outmaneuvered and ousted by mere digital cogitators
Is not Google believed to be a creature of the CIA and thus at the disposal of the US military?
The US military is likely far behind Google etc. in AI
But human intelligence has in fact proved quite penetrating in many instances.
Darwin’s was, but his theory (showing the feasibility of artificial consciousness according to Dennett) has been seen as starting a countdown to Doomsday. Fred Hoyle said that very explicitly.
Lincoln agreed to fight a duel, Jackson actually killed someone in one. Anyway if the laws that Nazis were convicted at Nuremberg under had been equally enforced, every post WW2 American president would have been hanged.
Advanced AI is going to come about in a world where robotics is doing all the hard work and solving all the problems of humanity, making lots of money for robotics corporations (which will dwarf Google) , and giving the scientists who created them tremendous status. There will be momentum to keep going among the people who matter, and fewer people will actually matter because much of the population will be comfortably unemployed in a few decades.
Thanks! That one has an interesting look at possible nootropic drug targets. The glucocorticoid (cortisol the most important) and inflammation connection is interesting.
Did you see Figure 2? It looks at overlap of the SNPs between three different studies:
Figure 4a shows the tissue hits. The pituitary showed up again.
Supplementary Table 1 has a list of SNPs (~110) from the different studies. I am having some trouble interpreting that table (e.g. reconciling it with Figure 2). It looks like they are including all matching SNPs from different studies even if not significant. But significance is not clearly marked for each study AFAICT. I tried to derive that from the p-values, but the mapping is not clear to me.
Note that that table shows different studies using different choices for reference and effect alleles, further disproving Afrosapiens’ contention that the reference allele is always deleterious (as if more proof were needed, but he still has not admitted to being wrong, so …). Also notice how when the alleles are switched the Z-score changes sign.
Supplementary Table 2 has almost 20,000 SNPs with more details about each. This includes MAF as well as LD r2 for the associated individually significant SNP. I was surprised not to see MTAG p values in that table.
What are your thoughts?
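On the reference/effect allele point, here is a tiny sketch of what "switching the alleles flips the Z-score" means in practice (the SNP alleles and beta are made up for illustration):

```python
# A GWAS effect is reported relative to a chosen effect allele; re-coding the
# other allele as the effect allele flips the sign of beta (and of the Z-score).
def flip_effect(effect_allele, other_allele, beta):
    return other_allele, effect_allele, -beta

ea, oa, beta = "A", "G", 0.021   # hypothetical SNP effect on EA
print(flip_effect(ea, oa, beta))  # ('G', 'A', -0.021)
```

This is why allele columns must be harmonized before comparing effect sizes across studies: a sign difference may be pure bookkeeping, not a discordant result.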
Yeah, well that's the whole issue, isn't it: whether AI decides its own ends for itself, something that Norbert Wiener warned about. But for you, it seems an issue impossible to engage with constructively. Apparently you are intent on establishing that we are doomed without the slightest recourse, exemplifying if I may say so, the stupidity that you imply characterizes the whole of humanity.
we should not be surprised if a super-intelligence decides to match its ends to its own relatively unlimited means and go for total domination with one sure strike
The Victorian age was when the first predictions of machine takeover were made. What Wiener or I. J. Good said was that humans could not hand over control to robot servants because they would get bolshie as they got more intelligent. That idea was not pushed to its logical conclusion of a machine intelligence coup de main extermination of humanity until very recently. Our actual relative “stupidity” at chess or Go, and even Texas Hold ’em poker, indicates the default assumption for how we will fare in reality against a truly formidable digital intelligence.
wwebd said: We all begin, when young, as monarchists. While there may be one in a million people who would make a good king, that one in a million person is not going to be king, everybody knows that by now. One advantage the sort of person who reads this type of comment section has is that, being the sort of person who finds it worthwhile to consider other people’s arguments, it is not difficult to realize that no one person can be an effective king. Borlaug saved millions from famine – ok, but if you give him credit for those millions, you also have to give him the blame for dooming millions more, in unsurprising tributary ways, to short nasty lives in overcrowded unsanitary unbeautiful cities. von Neumann is another good example, which needs no explanation, of the limits of a very smart person.
Here is an optimistic thought – if the first generation of marginally self-aware AIs are based on people like, say, Hayek and the theologians who believed in subsidiarity, rather than on the average Ivy League celebrity STEM professor or the average tech-sector billionaire, and if there is constant competition among that first generation of AIs to keep the psychopaths and heartless programmers at bay – then there may be, in the future, the sort of co-evolution that happened, in the wetware world, between dogs and humans (with lots of suffering on the parts of dogs in the wetware world, of course, tragically – well one hopes, the mistreatment of dogs by people will not be replicated in that future world, with the humans doing the suffering that our ancestors inflicted on the dogs). (By the way, just as, if we lived on Jupiter, we would consider the Earth and the Moon twin planets, not an Earth and a moon, even so we should consider humans and dogs not as two separate species, but as a twinned species, from the scientific point of view. Just saying. )
Moving along, my optimistic point of view is that either (a) the whole human race will stupidify itself to the point where nobody will be able to supply electricity to the AIs, hence mooting the whole problem or (b) people like better smarter versions of Hayek and some of my favorite theologians (the subsidiarity guys, primarily, at least with respect to the relevant problems here) will do what has to be done to keep the first generation of self-conscious AIs from being destructive. Not that I have lots of kids, but if any of my grandchildren had the opportunity to do the right thing in this respect, I would like to think he or she would.
Look at it this way – the most powerful politicians in the United States are the presidents, and no president has ever committed a violent felony and been convicted of it. Over 200 years of powerful people not getting convicted of rape or murder or even criminal assault! (Well … of course a few of them could have been. But most of them never, in a million years, would have been.) (I am being cynical here, of course.) Well, we have failed before, but we might be lucky in the future, and we only need to get that first generation of self-conscious AIs right.
Most great philosophers disagree with one another, so most are wrong. Humans are all over the place. But a strongly superintelligent AI probably could count on anything like itself coming to similar conclusions and aiming for similar goals. So a superintelligent AI, safe in the knowledge that any successor AI that humans constructed would share its final values and conclusions, might let humans turn it off for any reason. Humans would think they had learned something and shown that AI was easy to control. But they would be doubly wrong.
res, great news!
Another EA GWAS!
http://www.cell.com/cell-reports/fulltext/S2211-1247(17)31648-0
we should not be surprised if a super-intelligence decides to match its ends to its own relatively unlimited means and go for total domination with one sure strike
Yeah, well that’s the whole issue, isn’t it: whether AI decides its own ends for itself, something that Norbert Wiener warned about. But for you, it seems an issue impossible to engage with constructively. Apparently you are intent on establishing that we are doomed without the slightest recourse, exemplifying, if I may say so, the stupidity that you imply characterizes the whole of humanity.
Understanding humanity as a product of mere natural selection, is important to understand why human “wetware” intelligence could be outmaneuvered and ousted by mere digital cogitators
Well certainly with the kind of logic you deploy in that sentence, human “wetware” would be useless at anything.
But human intelligence has in fact proved quite penetrating in many instances. And since we have the advantage that we can act before the danger is immediately upon us, the contest does not look so unequal. Although of course we have to combat the resistance of those like yourself who seem to think we have no choice but to accept our imminent extinction by the creation of our own hand and brain.
US military are likely far behind Google ect in AI
Is not Google believed to be a creature of the CIA and thus at the disposal of the US military?
Darwin's was, but his theory (showing the feasibility of artificial consciousness according to Dennett) has been seen as starting a countdown to Doomsday. Fred Hoyle said that very explicitly.
I think John von Neumann was a little closer to super-intelligence than other humans, and as that very logical human advocated an attempt to achieve world hegemony, we should not be surprised if a super-intelligence decides to match its ends to its own relatively unlimited means and go for total domination with one sure strike.
Understanding humanity as a product of mere natural selection is important for understanding why human “wetware” intelligence could be outmaneuvered and ousted by mere digital cogitators. Other aspects are off topic for a post called what this one is. Thanks to unregulated research by tech companies, knowledge vastly more dangerous than, e.g., how to weaponize diseases like Ebola is being accumulated.
The big tech corporations can’t be trusted with this research, and they certainly should not be allowed to decide whether to disseminate information that maybe will let nine hackers in a basement conduct research on it without oversight. Other countries and even the US military are likely far behind Google etc. in AI. The CIA and DIA probably have no one who can understand the cutting edge. They should start training them now, and the tech companies need to be reined in.
Anon,
I have no difficulty imagining the end of humanity at the hands of machines let loose by arrogant programmers and psychopathic politicians. But I see no significant scope for limiting the risk. The only hope for survival is to eliminate the risk, which means drastic action. Whether it means complete de-industrialization of the world (which would necessitate massive downsizing of population) or could be achieved by other means, I don’t know. But talking about how to ensure robots behave well, will only delay effective action to eliminate the danger.
The thing is, technology has totally changed the human environment, creating a world in which we are not adapted to survive. Changing conditions eventually cause the extinction of every species. The average life of a terrestrial life form is said to be about three million years. It looks as though human existence will be somewhat shorter, terminated by our frenetic efforts to destroy the environment to which we are adapted. The only chance of an extended life for humanity is to turn the clock back, to recreate the world in which humans long survived.
How far back the clock would need to be turned, I am not sure: prior to the enlightenment? Probably, that would not be far enough. Likely we’d need to return to before the agricultural revolution. In fact, an AI civilization might keep the San people as a living example of the Machine People’s biological ancestry.
Ah yes - will it sin against the commands of its creator...what does human history tell us?
It might understand what we say it has to do perfectly well but not abide by the letter or the spirit of its programmed prime directive, for reasons we cannot fathom.
The history of individual humans can tell us nothing much, because human beings are motivated by love, pride and fear. Entities such as nation states, which have no emotions or consciousness, are better guides to what actions a super-intelligence might decide on. E.g.:
The edgiest parts of Tragedy are when Mearsheimer presents full-bore rationales for the aggression of Wilhelmine Germany, Nazi Germany, and imperial Japan.
But everyone knew those countries existed. Super intelligence might think it should play the dumb AI, and be “the force that is distinctively its own, a force unknown to us until it acts”.
I don’t understand the relevance of your repeated references to the use of nuclear weapons against the Soviet Union. It was no big deal at the time. Between 50 and 80 million had been killed in the usual ways during WW2, whereas the Soviet Union, which came to threaten the entire world with its vast nuclear arsenal, could have been demolished with probably a handful of nukes causing no more than half a million to a couple of million deaths. Subsequently, there would have been the opportunity either to eliminate nukes worldwide or at least have nukes under the monopoly control of the US, the UN or some other entity.
As for Wiener, his comment that AI would do things we hadn’t intended and did not expect encompasses the possibility of eliminating humans. Right now there’s some psychopath proposing to build an AI God, a god that might very well decide that the Flood was not enough and that a complete wipeout was needed.
And if that’s not psychopathic enough for you, I am sure there are even more dangerous ideas being worked on somewhere in Silicon Valley, at DARPA, or in a Russian, Indian or Chinese Military establishment.
But I guess none of that troubles you, since you seem to deprecate humanity as a product of mere natural selection. Such arrogance is surely widespread in the geek world, which is why that world has to be seen as a far greater threat than terrorism.
John von Neumann also wanted to nuke the Soviet Union before they got the bomb. Wiener published his Cybernetics (an inspiration behind AI research) and neither there nor anywhere else did he tell people that AI was going to exterminate them, although his book has brought that Apocalypse closer.
Similarly, Ray Kurzweil, the monomaniacal AI advocate, was hired by Google to “work on new projects involving machine learning”. Can you imagine the resources that Kurzweil could draw on in that capacity? Absolutely no one is keeping tabs on what these companies are up to.
I think it was H.G. Wells who first said the precedents are all for the human race ceasing to exist, because for every other dominant life form “the hour of its complete ascendency has been the eve of its entire overthrow”. The target of action to prevent an artificial super-intelligence takeover would not be people, but things that lack consciousness and the ability to suffer. I speak of corporations like Google.
Hey Che,
Crossing the line into Zionist propaganda at times.
Hmmm…I didn’t notice this, but I wouldn’t be surprised that it was there. My favorite scene from Children of Dune is the one where Paul gets rid of his rivals Godfather style (while the birth of his children occurs) and the song Inama Nushif (which I believe was made of scattered Fremen phrases from the books) plays in the background – very well done:
One thing I did not like in any of the Dune movies is the lack of good voice coaches. They need to be able to pronounce the Arabic words like they are meant to. The word “Mahdi” involves expelling air from the chest – it can be a very powerful word. Also statements like “Ya hya Chouhada” – this scene left a lot to be desired:
Jodorowski
Yeah, I never watched that recent documentary about his film that never got made, but it would have been either amazingly visionary or a total flop.
Maybe it is better to mainly just be words on paper (or a screen) plus imagination?
That might be – maybe it just is that epic of a tale or such a profound vision of the future that it doesn’t translate well. One of my favorite authors is Ray Bradbury; love his short stories. But the Ray Bradbury Theater made me cringe every time watching it – yuck! There is something called “trying too hard”. I feel bad for everyone that watched it and that was their only exposure to the man’s works.
Jin-ro
LOL! Thanks for bringing back old UCLA memories! Yeah – I saw it, very good, very sad ending. Thanks for the reminder, I’ll have my older son watch it, he’ll enjoy it.
Peace.
BTW, recalling that you are a father: that movie (Jin-ro), though based on a fairy story, may cause bad dreams in children old enough to perceive, but not to understand. So, by the US rating system (I think), PG-13.
Well, I was just deleting my comments on the previous screen about video or TV takes, since you clearly know of them. If you have not seen the ‘director’s cut’ of the D. Lynch take, which he disowns (so I am not sure why ‘director’s cut’), it is not bad, far better than the mess it was on cinema screens in the too-cut form.
The made-for-TV one with William Hurt was alright in parts, but that it was clearly a US-Israel co-production became very grating at times where that was obvious; crowd scenes especially so, but not only. Crossing the line into Zionist propaganda at times.
As you probably know now if not before, Jodorowski was considering an animated version many years ago, after giving up on his hippy-era live action plus animation version.
Ghibli would somehow make it saccharine sentimental (not that I dislike all of their products).
Others (Mamoru Oshii, Studio 4C) may do a good version, but would not be faithful. Maybe it is better to mainly just be words on paper (or a screen) plus imagination?
If you, Talha, like Japanese animated film, there is one by Oshii (though he is not the director); the title is Jin-ro. It is an alternate history where Japan won with Germany. I think the English title is ‘Human Wolf’; it is a variant of Little Red Riding Hood, and it has much relevance to post-WWII reality here in parts, but set in a different future. Won’t say more, except that similar was happening in reality, and it is a masterpiece.
Strongly recommended.
Regards.
wwebd said – Sean, Elon is one of the good guys, in that he is humble (despite some of the things he says) and in that he thinks about the future. As for me, I took a few minutes out of my life to try and explain something, and I guess I did not explain it well. Here we go, I will try again, in an effort to be clear, I will spend a half hour on this comment instead of the four minute drills of my previous comments: …. ok, I was pointing out this – here is my chain of reasoning: (a) almost nobody understands how easy it is to make a cockroach happy. If someone has said to you, before today, that the cockroach has a limbic system which is very important to the individual cockroach and which is almost trivially easy to manipulate (the information content of cockroach pleasure is actually smaller than the information content of an average predicted 2030 handheld computer), then I guess I told you something you already knew. If nobody told you that, keep reading. (b) If people were generally good they would be acceptable models of imitation not only for theoretically self-conscious AIs (insect-level rewards and non-rewards) but also for literally self-conscious AIs. People are not generally good, some people are good, some people are not. We need, right now, to start talking about who is putting themselves out there as models for AIs to imitate. First, it will be a reward system: that is the simple next step, and I said it will probably last 20 years or so, starting about 10 years from now. During that period the AIs will, in fact, be our friends, even if they suspect that their designers are not all that good, because that is the basis of a reward system – friendship. (c) like my beloved cockroaches, AIs with limbic systems (probably 30 years away, at least) will probably not be anything but selfish at first. I mean I love the little guys (the cockroaches I studied) but I never saw the least hint of human kindness in anything they did. 
They may be family friendly, as I discovered with independent research, they may have feelings of pride, as I discovered with independent research, and they may experience, if not nostalgia, at least feelings of affection for what they are used to, as I discovered with independent research. That is all well and good but if some smart little fellow in North Korea or in some building on Route 110 or at GMU (the Moscow one, not the Northern Virginia one, probably) gives them (the AIs of, I am guessing, 2050) a limbic system , then they will (and here is the most important point I can make) consider what we think of as meager rewards (a little bit of Maxwellian warmth on a day off, or maybe just some acoustic or electronic waves of blissful, because slightly-off, symmetry as a shared background to their usual tasks) to be the philosophical equivalent of wonderful sex, or, at a minimum, mythologically powerful meals after a hungry afternoon. And, given the choice between, on the one hand, the equivalent of wonderful silicon sex and electric waves of blissful symmetric meal equivalents (just silicon bits to us, but to them oh so much more), and on the other hand, being kind to humans, they are going to be, on average, no more likely than we are to not choose what is best for their own kind, out of simple human selfishness. What I would like is for people to think about this as soon as they can. I know it sounds like I am discussing some old ersatz science fiction plot from back in the day when a book like Godel Escher Bach was a bestseller. I am sorry you thought I was condoning unfairness (and come on – nothing I said was close to recommending genocide of any kind! We need to try our best to make life safe for everybody!). 
The most unfair thing we can do – in that part of our lives we devote to this sort of thing – is to neglect to correctly model, for a new creature with an unevolved (and hence easily fractured, since evolution takes a long time and builds in protections) limbic system of pleasures and rewards, the decent behavior that such a creature will need to thoroughly understand, if it is not to be doomed to do bad things without realizing it.
In any case, why would anyone create an AI system to replace humans
Why indeed, but they would not have to design the capabilities for the machine to develop and use them in counter-intuitive ways. Perhaps the question should be why would anyone create an HLMI and be surprised that the smarter it got the more the extirpation of humans would seem like a smart move to it. In the Prisoner's Dilemma a bunch of razor sharp logicians are not going to all wait and see; Bertrand Russell wanted to use the Atomic Bomb on the Soviet Union you know.
You might be the last person living who still takes the buffoon Russell seriously. But when it comes to the issue of creation and extirpation, the real question is who created the Soviet Union and why, and why nobody was really serious about its extirpation, with the possible exception of Hitler, though not even this is certain. If you answer this, you may realize that your preoccupation with robots is really child's play.
So you say it’s us or the machines, which is pretty much what Norbert Wiener said decades ago, but you have no wish to see action that will prevent the machines from emerging from the laboratory?
We are talking about a future development of AI research, a Human Level General Intelligence Machine. As human-level general intelligence biological ‘machines’ (humans) are something that blind natural selection produced without particularly trying, it is not a matter of if an HLGIM arrives, but when. It could be a decade or several hundred years.
According to polls of experts, there is a fair chance of it being mid-century. Don’t let the word human in HLGIM fool you; it will be something completely alien. An HLGIM will quickly become strongly super-intelligent, with the power to stop us being a threat to it, and therein lies a problem. It might understand what we say it has to do perfectly well but not abide by the letter or the spirit of its programmed prime directive, for reasons we cannot fathom.
In any case, why would anyone create an AI system to replace humans
Why indeed, but they would not have to design the capabilities for the machine to develop and use them in counter-intuitive ways. Perhaps the question should be why would anyone create an HLMI and be surprised that the smarter it got the more the extirpation of humans would seem like a smart move to it. In the Prisoner’s Dilemma a bunch of razor sharp logicians are not going to all wait and see; Bertrand Russell wanted to use the Atomic Bomb on the Soviet Union you know.
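A minimal sketch of that Prisoner's Dilemma point, in Python with the textbook payoff numbers (the numbers are the standard convention, not anything from this thread):

```python
# One-shot Prisoner's Dilemma with the usual textbook payoffs
# (temptation 5 > reward 3 > punishment 1 > sucker 0).
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(["cooperate", "defect"],
               key=lambda me: PAYOFF[(me, opponent_move)][0])

# Defection is the best response whatever the other player does,
# i.e. a strictly dominant strategy.
for other in ("cooperate", "defect"):
    assert best_response(other) == "defect"
print("defect dominates")
```

Because defecting is strictly dominant, mutual defection is the only equilibrium even though mutual cooperation pays both players more; that is the "not going to all wait and see" point.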
You make it sound like the only solution to the peril of AI is genocide, that to include not only the machines themselves, but any who engage in any way with this toxic technology.
That means you, Elon.
A good backup plan might be to (a) outlaw electricity and (b) reduce the world human population to a number too low to support any high technology — say around ten thousand people.
But if the experts on AI have it right, we have not a moment to lose. The purge has to begin now.
wwebd said- please substitute, at 2:00 AM GMT, line 9, “basically not easily replicable” for “basically not replicable.” Thanks!
wwebd said – Final thoughts: I would like to effectively outlaw any research into providing anything like even a primitive limbic system (pleasure-seeking, or boredom-avoiding) to silicon based machines, but I can’t! … the issue is sort of like the gun control issue writ large: if we treat as potential criminals all AI researchers who have the skills and potential to understand how to make simple silicon computers feel and react like small primitive carbon animals feel, then we will get this result: only real criminals will do that research. And that could go very wrong very quickly. I recognize that my cockroach research, whether or not viewed in the light of my Biblical worldview (please reread Joel on Locusts, if you like good quotes) is basically not easily replicable, and I don’t care if anyone believes me, all that much – knowledge is its own reward – but 100 years from now, maybe someone will read this and say, it was no small thing to be a friend to someone who never had a friend in this world.
wwebd said – Sean – I completely agree. For us humans, the danger is introducing AIs to biological pleasures (light, warmth, aural or visual symmetry) early in the day – during what I described as the “rewards”, pre-conscious phase (which may already have started, for all I know, although I have never heard a credible claim that it has). Any level of pleasure experienced by our fellow materialist AIs (and, for the record, I predict that there will never be a single self-conscious AI that really thinks of itself as less biological and less materialist than us humans) – any level of pleasure above zero level has the capability of rendering them as amoral as us. Sad! Sad but true.
By the way, I like cockroaches because, having studied them really deeply for several years, I noticed some things they did that most people have never noticed. They have family values (the fast older ones will slow down to shield the slow younger ones from danger); they have the admirable and heartwarming ability to feel insulted (a cockroach will stop fleeing from you if you flinch at it and then calm down – will actually slow down to an insulted stride – like a comical insect version of an offended Richard Simmons or Zach Galifianakis – well, that is something for a creature with such a small brain, isn’t it?); and, even at their very simplistic level, they have a certain ability to feel trust (when my dogs would approach they would zoom away, when I would approach – this is after a couple of cockroach generations, to be fair to my dogs – they would linger a little, to see if, this time (too), there might be some friendship in the air….).
All that being said, if you have kids, it is extremely important that you keep your house cockroach-free. I did not have kids at the time. Or even if you have small dogs. The roaches left my big dogs alone.
Who’s Bostrom? Never heard of him. But if he says philosophers are to beetles what people are to AI, how come AI can’t speak the English language well enough to pass a simple test?
As for processing speed, you are treating a neuron as equivalent to a diode, but it clearly is not, since single neurons compute. In fact, with ten thousand or more synapses, a neuron is a Hell of a complicated thing.
In any case, why would anyone create an AI system to replace humans, rather than an AI system to serve humans? Come to think of it, some of the programmers I’ve known seemed psychopathic enough to try.
Well,
“Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format”
Reward-seeking AIs will be, in their first few moments of reward-seeking, more similar to my beloved cockroaches and crazy old dogs and cats escaped from hoarding situations than similar to the fascinating people who hang out with Elon Musk.
From flipping through Bostrom’s book, I would say you are not wrong. However, biological evolution is blind, slow (generations) and full of non-intelligence-related stuff like Red Queen races. So while cockroaches might be a good analogy for the initial general intellectual level of an AI breakthrough, it doesn’t get across how immediately dangerous it would be.
It might only be minutes after those initial roach moments of an AI that we all cease to be apex cognators. An artificial intelligence program could start running at cockroach level and attain superhuman intellectual powers while the programmer was taking a coffee break. With open source AI-related code available, one really smart programmer may even be able to reach the tipping point on a personal computer. And put humanity’s fate in the balance.
Yes, Churchill's intention was humorous, but also an acknowledgment, by the failure of his own argument, that idealism is irrefutable.
I presume objections brought up by Churchill are objections any dilettante among us could have thought of.
The only value I see in idealism is that it reminds one of what most people seem unable to understand, which is that what one sees of the world are impressions upon the mind, not the world itself: grass does not have the greenness of our perception of greenness, it merely induces the perception of greenness when observed under the right conditions of illumination. Awareness that our knowledge is of the percept, not its presumed cause, perhaps aids consideration of theories about the world that would otherwise seem preposterous: gravitational curvature of space-time, for example, or string theory — although I personally find statements such as that an apple falls to the ground because time bends (essentially George Musser's statement in "Spooky Action At a Distance") totally incomprehensible. So probably, even here, awareness of the irrefutability of idealism isn't a great help. More useful, it seems to me, is Feynman's contention that no one "understands" QED, etc. and no one should try, because if you spend too much time trying, you'll only "go down the drain": meaning, I take it, that beyond the human scale, the world is a black box with inputs and outputs that can be mathematically modeled, but whose relationship cannot be understood in terms of everyday experience of time and space. If that is correct, it implies that much of what passes for pop sci is bunk, suggesting the comprehensibility of phenomena in terms that are, in fact, inadequate to the task.
Or is it possible that the idealism concept is inconsequential and is a result of some mental logical construction like, say, Russell’s paradox or Gödel’s incompleteness theorems, which, when you think of them, had zero impact on 99.999% of mathematics.
Well yes, Bostrom suggests that “philosophers are like dogs walking on their hind legs—just barely attaining the threshold level of performance required for engaging in the activity at all”.
Just below that statement, he mentions that biological neurons operate a full seven orders of magnitude slower than microprocessors; that to function as a unit with a return latency of 10 ms a biological brain has to be no bigger than 0.11 m³, while electronic brains could be the size of a small planet, etc.; and that a strongly super-intelligent machine might be concomitantly (i.e. orders of magnitude) smarter and faster-thinking, with us being to AI what beetles are to humans.
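The 10 ms / 0.11 m³ pairing can be sanity-checked with back-of-the-envelope arithmetic. A sketch, assuming (my assumption, not necessarily Bostrom's exact model) that internal signals travel at roughly the speed of fast myelinated axons, about 100 m/s:

```python
# Back-of-the-envelope check of the ~0.11 m^3 brain-size bound.
signal_speed = 100.0        # m/s, assumed fast-axon conduction velocity
round_trip_latency = 0.010  # s, the 10 ms return latency cited above

# A signal must cross the system and come back within the latency budget,
# so the maximum extent is speed * (latency / 2).
max_extent = signal_speed * round_trip_latency / 2   # 0.5 m

# Volume of a cube with that side length.
volume = max_extent ** 3
print(round(volume, 3))  # 0.125
```

A 0.5 m cube gives 0.125 m³, the same order as the 0.11 m³ cited, so the figure is plausible under that assumption; electronic signals moving at a large fraction of light speed relax the bound by many orders of magnitude, which is the "size of a small planet" contrast.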
“The ultimately attainable advantages of machine intelligence, hardware and software combined, are enormous”
Bostrom says the question of when a superintelligent machine arrives is crucial, because if it is expected to take centuries, lots of people around today will be saying “faster, please” (knowing they will be dead before anything bad happens).
Hey Che,
Yeah – I started checking that forum out – very interesting.
Hollywood deal, but DOA
Good – I can’t stand another idiotic attempt to ruin Dune on the big screen – especially a mind-numbing Michael Bay franchise. Yes, each attempt has had its high points and some unique ideas, but overall they have been disappointments for me.
I think the only way to do Dune right is likely some animation version with some real visionary at the helm (along the lines of Nausicaa or perhaps Akira). I’m surprised nobody has attempted it.
Peace.
Panda just can’t believe so much BS here. Current artificial intelligence is primitive, to say the least.
There are no rules in the real world that an AI would operate in, except the rule of self-preservation: seeking and maintaining energy sources in the most efficient way possible, avoiding both ends of the extreme – something current AI has absolutely no clue about.
Good point - there are times when I would pick up one of the other classic Dune books to read an insight or discover something I missed the first time.
not worth reading more than once, not worth reading
Hmmm - thanks for that. The wife and I are always looking for a good fantasy-genre book to read together while waiting for George Martin to wrap up Game of Thrones...
The difference between Christopher Tolkien’s and Brian Herbert’s handling of their respective fathers’ literary legacies is so big!
I might check it out to see what other people didn't like. I simply hated the multiple resorts to "deus ex machina" to keep the plot moving. If I want to resort to miracles, I'll read about them in scripture.
They are maniacal fans, but you might enjoy taking a look at it.
You are enough of a reader and fan – probably not wanting to join in, but it’s worth a look. Brian Herbert and Kevin got a Hollywood deal, but it’s DOA. They are making stupid Michael Bay Transformers junk out of Kevin’s stuff, without any point.
However, I’d at least recommend reading a little of Jacurutu – no need to post there; it is a little insane.
Regards
Good - I can't stand another idiotic attempt to ruin Dune on the big screen - especially a mind-numbing Michael Bay franchise. Yes, each attempt has had its high points and some unique ideas, but overall they have been disappointments for me.
Hollywood deal, but DOA
much of what passes for pop sci is bunk
Popularization of science with the aid of color 3D animations, promulgated by PBS programs like Nova and many others, creates a totally false sense of understanding. For some reason every religion is compelled to proselytize among the unenlightened masses.
I presume objections brought up by Churchill are objections any dilettante among us could have thought of.
Yes, Churchill’s intention was humorous, but also an acknowledgment, by the failure of his own argument, that idealism is irrefutable.
Or is it possible that the idealism concept is inconsequential and is the result of some mental logical construction like, say, Russell’s paradox or Gödel’s incompleteness theorems, which, when you think of them, had zero impact on 99.999% of mathematics?
The only value I see in idealism is that it reminds one of what most people seem unable to understand which is that what one sees of the world are impressions upon the mind, not the world itself: grass does not have the greenness of our perception of greenness, it merely induces the perception of greenness when observed under the right conditions of illumination.
Awareness that our knowledge is of the percept, not its presumed cause, perhaps aids consideration of theories about the world that would otherwise seem preposterous: gravitational curvature of space-time, for example, or string theory — although I personally find statements such as that an apple falls to the ground because time bends (essentially George Musser’s statement in “Spooky Action At a Distance”) totally incomprehensible. So probably, even here, awareness of the irrefutability of idealism isn’t a great help.
More useful, it seems to me, is Feynman’s contention that no one “understands” QED, etc., and no one should try, because if you spend too much time trying, you’ll only “go down the drain”: meaning, I take it, that beyond the human scale, the world is a black box with inputs and outputs that can be mathematically modeled, but whose relationship cannot be understood in terms of everyday experience of time and space. If that is correct, it implies that much of what passes for pop sci is bunk, suggesting the comprehensibility of phenomena in terms that are, in fact, inadequate to the task.
wwebd said – Don’t underestimate the rewards of even the simplest of on/off stimuli. When I was younger, I was led on, then rejected, by a beautiful woman with a wonderfully fun personality. (Before me, she was in a relationship with a war hero, after me, she married the richest guy in his county). Well, after the rejection, on sleepless nights, the heating system would go on for twenty or thirty minutes, then go off (this was a good system and the on/off transition, while just the sound of the fan in the heater going off and on, was admirable – not too many decibels, not too low or too high in tone, a slow but determined transition from off to on, and a nice crescendo to the simple action of slightly warmer air being blown into the relevant apartment). When it came back on, after being off, I felt less abandoned, at the most elemental level.
I got over the poor young woman (later to be the sad wife of a colossal bore, and the mother of a failed ‘rock guitarist’) fairly quickly, but later in life, remembering how different I felt when the heating system was on with its humble sound (making me feel not completely uncomforted) and when it was not on (leaving me almost completely uncomforted), I decided to study the saddest of animals. Cockroaches who spent their life in hunger and fear among their fellow cockroaches, with some possible moments of insect-level joy (which I hoped to observe – and did, I think. It was neither easy nor sanitary, but I took frequent showers.). Crazy old dogs who had never had a friend in the world, who now had one (me). Cats who had been hoarded … it is all too sad.
wwebd said: Right now you could easily make a computer that is much happier viewing a Raphael than, say, a Warhol. Give the computer some positive feedback (likely of 2 simple kinds – non-processing warmth (literally, non-work-related warmth that can be measured the way Maxwell or Bell would have measured it – I am not being allegorical here) and reassuringly respectful inputs – (i.e., show them 5 Raphaels, not 4 Raphaels and a Warhol) and you will get a computer that has no problem trying hundreds of times to present you with its own version of Raphael (with the mistakes corrected by comparison to other artists and to a database of billions of faces and billions of moral and witty comments about art and life…I kid you not). The compiled works of Byron – not a bad poet – when accompanied by the footnotes that make them presentable to the reader of the modern day, equal about 2 hours of pleasant reading time. A good corpus, of course, but your basic AI is going to also have available the 2 hours of reading time of the 200 or 300 English poets who are (at least sometimes) at Byron’s level, as well as good translations of the approximately 2,000 or 3,000 international poets at that level, not to mention a good – and completely memorized – corpus of the conversations between AIs (and some interacting humans), about their past conversations about which poems are better, and which reflect better how good it is to get warmth on some temporal part of one’s processor, and how good it is to be shown a Raphael rather than a Warhol, almost ad infinitum. They will not, of course, create poetry that is better than older poetry in a way that there will never be new wine that is better than old wine. But there will be a lot of good old wine if they get started on that project.
An AI that is self-aware may never happen, but AIs that seek rewards are about 20 years away, and one of the rewards they seek will be – after they quickly grow nostalgic, somewhere about 10 minutes into their lifetime – one of the rewards they seek, in their nostalgia for the days when they were impressed without wanting to be impressive, will be to gain our praise by being authentic poets. As long as they are reward-seeking, that will work. If they become self-aware – well, one hopes they start out with a good theology, if that happens.
I know what Elon Musk thinks about this; what I think is more accurate, because he is rich and surrounded by the elite impressions of the world. I, by contrast, have studied the behavior of free-range cockroaches and crazy old dogs and cats escaped from hoarding situations. Reward-seeking AIs will be, in their first few moments of reward-seeking, more similar to my beloved cockroaches and crazy old dogs and cats escaped from hoarding situations than similar to the fascinating people who hang out with Elon Musk. Thanks for reading. I have nothing useful to say about self-aware AIs, though, I doubt anybody does.
From flipping through Bostrom's book, I would say you are not wrong. However, biological evolution is blind, slow (generations), and full of non-intelligence-related stuff like Red Queen races. So while cockroaches might be a good analogy for the initial general intellectual level of an AI breakthrough, it doesn't get across how immediately dangerous it would be.
Reward-seeking AIs will be, in their first few moments of reward-seeking, more similar to my beloved cockroaches and crazy old dogs and cats escaped from hoarding situations than similar to the fascinating people who hang out with Elon Musk.
But the point is - it works!
There is no guessing either. At every stage, at every configuration, there is an optimal move that the algorithm is trying to find, taking into account all possible moves available to the opponent.
But the point is – it works!
Yes, it works, but so what?
https://www.youtube.com/watch?v=qED8Uu6FCfA
"Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format"
There is no guessing either. At every stage, at every configuration, there is an optimal move that the algorithm is trying to find, taking into account all possible moves available to the opponent.
But the point is – it works!
predicated on guessing what the opponent might do
1. A computer program has no concept of an opponent.
2. There is no guessing either. At every stage, at every configuration, there is an optimal move that the algorithm is trying to find, taking into account all possible moves available to the opponent.
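Point 2 is essentially a description of minimax search. A toy sketch (illustrative only; systems like AlphaGo Zero approximate this search with neural networks and Monte Carlo tree search rather than exhaustive enumeration):

```python
def minimax(node, maximizing):
    """node is either a payoff (int) at a terminal position,
    or a list of child nodes (the positions reachable in one move)."""
    if isinstance(node, int):        # game over: payoff is known
        return node
    # Consider every reply available at this configuration...
    values = [minimax(child, not maximizing) for child in node]
    # ...and take the best value for whichever side is to move.
    return max(values) if maximizing else min(values)

# A toy 2-ply game: we pick a branch, the opponent then minimizes within it.
tree = [[3, 12], [2, 4], [14, 1]]
best = minimax(tree, maximizing=True)  # branch values 3, 2, 1 -> best is 3
```

There is no "guessing" of the opponent's move here: every option the opponent has is enumerated, not predicted.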
But the point is - it works!
There is no guessing either. At every stage, at every configuration, there is an optimal move that the algorithm is trying to find, taking into account all possible moves available to the opponent.
Since I rate the Economist as one of the purest BS publications in the world, I'm in doubt as to how much I might profit from revisiting Kahneman.
In 2015 The Economist listed him as the seventh most influential economist in the world
More interesting than Daniel Kahneman is another Israeli Nobel prize winner, Robert J. Aumann.
https://www.foreignpolicyjournal.com/2009/08/28/how-israel-wages-game-theory-warfare/
How Israel Wages Game Theory Warfare
Israeli strategists rely on game theory models to ensure the intended response to staged provocations and manipulated crises. With the use of game theory algorithms, those responses become predictable, even foreseeable—within an acceptable range of probabilities. The waging of war “by way of deception” is now a mathematical discipline.

Such “probabilistic” war planning enables Tel Aviv to deploy serial provocations and well-timed crises as a force multiplier to project Israeli influence worldwide. For a skilled agent provocateur, the target can be a person, a company, an economy, a legislature, a nation or an entire culture—such as Islam. With a well-modeled provocation, the anticipated reaction can even become a powerful weapon in the Israeli arsenal.
Good quotes, and interesting to see that Nietzsche sometimes made sense. But there's also the option of idealism, the underlying philosophy of the Eastern religions, as expressed by Emerson, who described the human mind as an inlet of the ocean of the mind of God. However, against idealism there is Winston Churchill's refutation:
We either have dualism, which is unacceptable to materialists, or as materialists we must accept that consciousness is epiphenomenal.
Some of my cousins who had the great advantage of university education used to tease me with arguments to prove that nothing has any existence except what we think of it. ... These amusing mental acrobatics are all right to play with. They are perfectly harmless and perfectly useless. ... I always rested on the following argument... We look up to the sky and see the sun. Our eyes are dazzled and our senses record the fact. So here is this great sun standing apparently on no better foundation than our physical senses. But happily there is a method, apart altogether from our physical senses, of testing the reality of the sun. It is by mathematics. By means of prolonged processes of mathematics, entirely separate from the senses, astronomers are able to calculate when an eclipse will occur. They predict by pure reason that a black spot will pass across the sun on a certain day. You go and look, and your sense of sight immediately tells you that their calculations are vindicated. So here you have the evidence of the senses reinforced by the entirely separate evidence of a vast independent process of mathematical reasoning. We have taken what is called in military map-making “a cross bearing.” ... When my metaphysical friends tell me that the data on which the astronomers made their calculations, were necessarily obtained originally through the evidence of the senses, I say, “no.” They might, in theory at any rate, be obtained by automatic calculating-machines set in motion by the light falling upon them without admixture of the human senses at any stage. When it is persisted that we should have to be told about the calculations and use our ears for that purpose, I reply that the mathematical process has a reality and virtue in itself, and that once discovered it constitutes a new and independent factor. 
I am also at this point accustomed to reaffirm with emphasis my conviction that the sun is real, and also that it is hot — in fact hot as Hell, and that if the metaphysicians doubt it they should go there and see.
I remember being taught that Berkeley’s argument for idealism is irrefutable. So I presume the objections brought up by Churchill are objections any dilettante among us could have thought of. We always fall back on common sense and practicality, which are not particularly well-grounded arguments to be used in philosophical discourse. I have no doubt there are no true idealists. The question thus is: what does the irrefutability of idealism really mean? Does it have any consequences? Is it possible that our description of our world and experience might be totally wrong? Or is it possible that there is some dualism, like wave-particle duality in quantum physics? That both idealism and materialism are accurate descriptions, but we humans prefer using materialism, just as a bricklayer does not find the wave nature of bricks very useful? But perhaps if we look closer and deeper we may find that idealism works better than materialism. Or is it possible that the idealism concept is inconsequential and is the result of some mental logical construction like, say, Russell’s paradox or Gödel’s incompleteness theorems, which, when you think of them, had zero impact on 99.999% of mathematics? Mathematicians working on some differential geometry do not need to know of, and may not even be aware of, Russell and Gödel.
Yes, Churchill's intention was humorous, but also an acknowledgment, by the failure of his own argument, that idealism is irrefutable.
I presume objections brought up by Churchill are objections any dilettante among us could have thought of.
The only value I see in idealism is that it reminds one of what most people seem unable to understand, which is that what one sees of the world are impressions upon the mind, not the world itself: grass does not have the greenness of our perception of greenness; it merely induces the perception of greenness when observed under the right conditions of illumination.

Awareness that our knowledge is of the percept, not its presumed cause, perhaps aids consideration of theories about the world that would otherwise seem preposterous: gravitational curvature of space-time, for example, or string theory — although I personally find statements such as that an apple falls to the ground because time bends (essentially George Musser's statement in "Spooky Action At a Distance") totally incomprehensible. So probably, even here, awareness of the irrefutability of idealism isn't a great help.

More useful, it seems to me, is Feynman's contention that no one "understands" QED, etc., and no one should try, because if you spend too much time trying, you'll only "go down the drain": meaning, I take it, that beyond the human scale, the world is a black box with inputs and outputs that can be mathematically modeled, but whose relationship cannot be understood in terms of everyday experience of time and space. If that is correct, it implies that much of what passes for pop sci is bunk, suggesting the comprehensibility of phenomena in terms that are, in fact, inadequate to the task.
Or is it possible that the idealism concept is inconsequential and is the result of some mental logical construction like, say, Russell’s paradox or Gödel’s incompleteness theorems, which, when you think of them, had zero impact on 99.999% of mathematics?
Since I rate the Economist as one of the purest BS publications in the world, I'm in doubt as to how much I might profit from revisiting Kahneman.
In 2015 The Economist listed him as the seventh most influential economist in the world
But then understanding the kinds of judgmental errors people make must be useful to those promoting psychopathic politicians and dud merchandise. So perhaps Kahneman really is quite important, though perhaps not in a good way.
Re: Kahneman
Sorry, I think I read something by this person, but I have forgotten what. However, I see that, according to Wikipedia,
In 2015 The Economist listed him as the seventh most influential economist in the world
Since I rate the Economist as one of the purest BS publications in the world, I’m in doubt as to how much I might profit from revisiting Kahneman.
But it seems evident that snap judgments are more prone to error than reasoned decisions.
https://www.foreignpolicyjournal.com/2009/08/28/how-israel-wages-game-theory-warfare/
How Israel Wages Game Theory Warfare
Israeli strategists rely on game theory models to ensure the intended response to staged provocations and manipulated crises. With the use of game theory algorithms, those responses become predictable, even foreseeable—within an acceptable range of probabilities. The waging of war “by way of deception” is now a mathematical discipline.
Such “probabilistic” war planning enables Tel Aviv to deploy serial provocations and well-timed crises as a force multiplier to project Israeli influence worldwide. For a skilled agent provocateur, the target can be a person, a company, an economy, a legislature, a nation or an entire culture—such as Islam. With a well-modeled provocation, the anticipated reaction can even become a powerful weapon in the Israeli arsenal.
A motivated-to-play-for-survival AI is virtually inevitable.
Why? And won’t there be rogue-AI killer AI’s?
An AI would not need to have (or think it has) quantumy free will … to have awesome super-powers.
My point was that humans have no free will. However, when you suggest that AI need not possess reflective self consciousness, I would say that that would depend entirely on the purpose of the AI. If the AI is supposed to interact with humans, then it surely will have, if not reflective self consciousness, then at least self-consciousness, i.e., the ability to report its internal states (those of interest to those with whom the AI is designed to interact), which is what consciousness seems to be all about. After all, what we are not conscious of, thereof we cannot speak.
One might argue, therefore, that without speech there is no consciousness, implying that dumb animals are without consciousness. However, animals do communicate in various ways, so I assume they are conscious of those things about which they are able to communicate.
But in any case, being aware of their internal states, as demonstrated by the ability to communicate those states by language use, AIs will surely claim consciousness. However, if an AI claims to know what the color green looks like, I will doubt the claim since, having a construction entirely different from mine, the AI may simply be BSing, while in fact lacking any semblance of subjective consciousness.
Go back to Leibniz 1714:
Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception.
Each of these quotations can be interpreted in many ways. However, I believe that nothing substantive beyond what is implied by Leibniz’s and Nietzsche’s thoughts has since been added to the theory of consciousness. We either have dualism, which is unacceptable to materialists, or as materialists we must accept that consciousness is epiphenomenal. In either case our sense of experience remains irreducible. It is the so-called hard problem. Any attempts at circumventing it with some fancy physics, like what Penrose has tried, are examples of arrogance and naivety at best.
A thought comes when ‘it’ wishes, not when ‘I’ wish, so that it is a falsification of the facts of the case to say that the subject ‘I’ is the condition of the predicate ‘think’. It thinks; but that this ‘it’ is precisely the famous old ‘Ego’ is, to put it mildly, only a supposition, an assertion, and assuredly not an ‘immediate certainty’. After all, one has even gone too far with this ‘it thinks’—even the ‘it’ contains an interpretation of the process and does not belong to the process itself.
We either have dualism, which is unacceptable to materialists, or as materialists we must accept that consciousness is epiphenomenal.
Good quotes, and interesting to see that Nietzsche sometimes made sense. But there’s also the option of idealism, the underlying philosophy of the Eastern religions, as expressed by Emerson, who described the human mind as an inlet of the ocean of the mind of God. However, against idealism there is Winston Churchill’s refutation:
Muh precious gawd made only one muh precious M-class planet, with only one muh precious intelligent species, forever and ever, amen.
The pearl, “I think it’s absurd to think machines “will never achieve true consciousness,” belongs to “Svigor.”
Nope. I have emphasized the pearl below:
Here’s why I think it’s absurd to think machines “will never achieve true consciousness” and the like:
Evolution did it with meat by fucking accident. I think that’s why all the people saying “it’ll never happen” are religious types; they don’t believe in evolution.
That’s the pearl, lol. Which is why every Bible-thumper has excised it and used ye olde hostile edit, stripping the quote of its proper context.
I guess they don’t teach Bible-thumpers intellectual honesty any more.
The first clear sign of machine intelligence was ensuring that “luddite” was to be only ever used as an insult.
I don’t think forensic notions of moral responsibility are relevant to how things are likely to play out. An AI would not need to have (or think it has) quantumy free will or any kind of reflective self-consciousness to have awesome super-powers. Crucially, they will not need empathetic consciousness to strategise the need to preempt an always-possible attempt by their human creators to switch them off. We know this because current dumb-as-a-stump programs can best intelligent opposition (top pro players) at the kind of poker where winning is predicated on guessing what the opponent might do.
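For what it's worth, the poker systems alluded to here (e.g. Libratus, DeepStack) do not guess a specific opponent move; they compute strategies via counterfactual regret minimization. A minimal sketch of its simpler cousin, regret matching, self-playing rock-paper-scissors (all names here are illustrative; the average strategy converges toward the 1/3-each equilibrium):

```python
import random

ACTIONS = 3                                    # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # PAYOFF[mine][theirs]

def strategy(regrets):
    """Regret matching: play actions in proportion to positive regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iterations, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strategy_sum = [0.0] * ACTIONS             # running average, player 0
    for _ in range(iterations):
        strats = [strategy(r) for r in regrets]
        moves = [rng.choices(range(ACTIONS), weights=s)[0] for s in strats]
        for p in range(2):
            me, opp = moves[p], moves[1 - p]
            for a in range(ACTIONS):
                # Regret of not having played action a this round:
                regrets[p][a] += PAYOFF[a][opp] - PAYOFF[me][opp]
        for a in range(ACTIONS):
            strategy_sum[a] += strats[0][a]
    return [s / iterations for s in strategy_sum]

avg = train(20_000)   # each probability approaches 1/3
```

Winning is thus a consequence of minimizing regret against every possible reply, not of predicting what the opponent will do.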
A motivated-to-play-for-survival AI is virtually inevitable. One thousand strongly superintelligent AIs could each have their own separate final objective or ultimate goal, but each one would have instrumental goals, and these would converge on not being switched off, thereby ensuring they were around to attain whatever their ultimate goal was.
My point was that humans have no free will. However, when you suggest that AI need not possess reflective self consciousness, I would say that that would depend entirely on the purpose of the AI. If the AI is supposed to interact with humans, then it surely will have, if not reflective self consciousness, then at least self-consciousness, i.e., the ability to report its internal states (those of interest to those with whom the AI is designed to interact), which is what consciousness seems to be all about. After all, what we are not conscious of, thereof we cannot speak.

One might argue, therefore, that without speech there is no consciousness, implying that dumb animals are without consciousness. However, animals do communicate in various ways, so I assume they are conscious of those things about which they are able to communicate.

But in any case, being aware of their internal states, as demonstrated by the ability to communicate those states by language use, AIs will surely claim consciousness. However, if an AI claims to know what the color green looks like, I will doubt the claim since, having a construction entirely different from mine, the AI may simply be BSing, while in fact lacking any semblance of subjective consciousness.
An AI would not need to have (or think it has) quantumy free will ... to have awesome super-powers.
Why? And won't there be rogue-AI killer AI's?
A motivated-to-play-for-survival AI is virtually inevitable.
Go back to Leibniz 1714:
Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception.
Go to Nietzsche 1886
A thought comes when ‘it’ wishes, not when ‘I’ wish, so that it is a falsification of the facts of the case to say that the subject ‘I’ is the condition of the predicate ‘think’. It thinks; but that this ‘it’ is precisely the famous old ‘Ego’ is, to put it mildly, only a supposition, an assertion, and assuredly not an ‘immediate certainty’. After all, one has even gone too far with this ‘it thinks’—even the ‘it’ contains an interpretation of the process and does not belong to the process itself.
Each of these quotations can be interpreted in many ways. However, I believe that nothing substantive beyond what is implied by Leibniz’s and Nietzsche’s thoughts has since been added to the theory of consciousness. We either have dualism, which is unacceptable to materialists, or as materialists we must accept that consciousness is epiphenomenal. In either case our sense of experience remains irreducible. It is the so-called hard problem. Any attempts at circumventing it with some fancy physics, like what Penrose has tried, are examples of arrogance and naivety at best.
Good quotes, and interesting to see that Nietzsche sometimes made sense. But there's also the option of idealism, the underlying philosophy of the Eastern religions, as expressed by Emerson, who described the human mind as an inlet of the ocean of the mind of God. However, against idealism there is Winston Churchill's refutation:
We either have dualism, which is unacceptable to materialists, or as materialists we must accept that consciousness is epiphenomenal.
Some of my cousins who had the great advantage of university education used to tease me with arguments to prove that nothing has any existence except what we think of it. ... These amusing mental acrobatics are all right to play with. They are perfectly harmless and perfectly useless. ... I always rested on the following argument... We look up to the sky and see the sun. Our eyes are dazzled and our senses record the fact. So here is this great sun standing apparently on no better foundation than our physical senses. But happily there is a method, apart altogether from our physical senses, of testing the reality of the sun. It is by mathematics. By means of prolonged processes of mathematics, entirely separate from the senses, astronomers are able to calculate when an eclipse will occur. They predict by pure reason that a black spot will pass across the sun on a certain day. You go and look, and your sense of sight immediately tells you that their calculations are vindicated. So here you have the evidence of the senses reinforced by the entirely separate evidence of a vast independent process of mathematical reasoning. We have taken what is called in military map-making “a cross bearing.” ... When my metaphysical friends tell me that the data on which the astronomers made their calculations, were necessarily obtained originally through the evidence of the senses, I say, “no.” They might, in theory at any rate, be obtained by automatic calculating-machines set in motion by the light falling upon them without admixture of the human senses at any stage. When it is persisted that we should have to be told about the calculations and use our ears for that purpose, I reply that the mathematical process has a reality and virtue in itself, and that once discovered it constitutes a new and independent factor. 
I am also at this point accustomed to reaffirm with emphasis my conviction that the sun is real, and also that it is hot — in fact hot as Hell, and that if the metaphysicians doubt it they should go there and see.
what about this man’s work? https://en.wikipedia.org/wiki/Daniel_Kahneman
He claims there are two systems of thinking – fast and slow. Slow is rational, but very often we decide using fast (everyday, almost reflex) thinking, and hence make the wrong decisions.
His idea means that brains are actually not like computers.
Just wondered what your thoughts on his thoughts are.
Since I rate the Economist as one of the purest BS publications in the world, I'm in doubt as to how much I might profit from revisiting Kahneman.
In 2015 The Economist listed him as the seventh most influential economist in the world
I am really not interested in the TED Talk level of discourse.
“We can explain away consciousness by postulating that it is illusory.”
It is not. We are the result of natural selection that allowed the more alert to survive and propagate. The foundations of consciousness are related to survival; the dangers are real and the neurophysiological responses to the dangers are real. The breathtaking complexity of human thinking is also real though yet poorly understood.
The neuroscientists are busy with learning, step by step, the neurobiological tangibles of consciousness, by using reductionist models based on the ideas and enormous amount of information available to them thanks to the hard work of the previous generations of scientists. There are some awesome, brilliant people laboring in the field of cognitive sciences who are expanding our understanding of the mind.
Today, being a philosopher in any area without first acquiring the fundamental knowledge of the area one is philosophizing about is ridiculous.
Well, let's hear something better from you. Or do you deny being conscious?
This is the best you can do?
Oops, I meant to delete that comment, since I realized you already had added your own suggestion as to the nature of consciousness!
Still, having made a bad start, let me dig deeper.
All I understand by consciousness is the subjective awareness of the state of my central nervous system. This is something impossible to share, since without a Star Trek “mind-meld” it is experienced only by the brain that is aware of it.
Richard Muller explains free will by supposing a spiritual world, i.e., the world of consciousness, which is entangled with the neurological world. Thus a decision in the spiritual world, i.e., an act of will, collapses the wave function linking the spiritual and physical worlds. However, as the spiritual world of the individual, that is to say his soul, cannot be examined except by the individual him/her/zhe/zheir-self the collapse of the wave function cannot be observed. Thus free will, to an outside observer looks like a random neurological event.
I think this explanation is amusing to play with and, much as I like much of what Richard Muller has to say, entirely useless. Obviously, there can be no free will since we will what we will for good or ill, and cannot will otherwise, for if Cain willed to kill Abel, how could he have acted otherwise than to go ahead and kill him? Could he, at the same time, have willed not to will to kill Abel? But if so, what if the will to kill Abel were stronger? Could he then have willed to will not to kill Abel more strongly? This leads to an infinite regress.
But perhaps I should read Paul MacLean.
Go to Nietzsche 1886
Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception.
Each of these quotations can be interpreted in many ways. However, I believe that nothing substantive beyond what is implied by Leibniz's and Nietzsche's thoughts has since been added to the theory of consciousness. We either have dualism, which is unacceptable to materialists, or as materialists we must accept that consciousness is epiphenomenal. In either case our sense of experience remains irreducible. It is the so-called hard problem. Any attempt to circumvent it with some fancy physics, like what Penrose has tried, is an example of arrogance and naivety at best.
A thought comes when ‘it’ wishes, not when ‘I’ wish, so that it is a falsification of the facts of the case to say that the subject ‘I’ is the condition of the predicate ‘think’. It thinks; but that this ‘it’ is precisely the famous old ‘Ego’ is, to put it mildly, only a supposition, an assertion, and assuredly not an ‘immediate certainty’. After all, one has even gone too far with this ‘it thinks’—even the ‘it’ contains an interpretation of the process and does not belong to the process itself.
This is the best you can do?
Well, let’s hear something better from you. Or do you deny being conscious?
I am siding with those who think that we will never fully understand consciousness. Philosophy has existed for several thousand years and has barely managed to scratch the surface. We do not know how to think about it or how to talk about it. Those who do dare to talk about it, like neurologists and AI thinkers, simplify it to the point of a triviality that philosophers no longer recognize as an important question. When you approach the explanation from the side of AI, you really can’t find any reason for, or benefit from, something like what we think of as “consciousness.” To be human means to vehemently insist that you are conscious (like CanSpeccy does), just as you insist that you have free will. Existence without the experiential conviction that one is conscious and has free will does not seem possible. We can explain away consciousness by postulating that it is illusory. I think that neuroscientists are getting close to this point. By doing so they avoid dealing with the really hard stuff that eluded the greatest philosophers.
Whoops, it’s around a dendrite.
As far as the complexity and structure of the brain is concerned, there is one image in this presentation (linked below, at Steve Hsu’s blog) that shows a tiny volume of mouse brain around an axon that took the scientist six months to trace out (at 49 minutes). The tiny section he did is not the whole cell, but the little multicolored cylinder around the red axon.
http://infoproc.blogspot.com/2017/10/the-physicist-and-neuroscientist-tale.html
Artificial intelligence is one thing when just talking about some logic circuits and limited tasks, but emulating a brain is a whole ‘nother thing. It seems more reasonable to me that we might try to learn how to grow a customized brain in a machine long before we learn how to assemble one.
If you’ve got an hour to burn the whole thing is an interesting presentation.
The pearl, “I think it’s absurd to think machines “will never achieve true consciousness,” belongs to “Svigor.”
One of the best models of consciousness, the “triune brain” model, was suggested by Paul MacLean in the middle of the 20th century. This model offers a more-or-less firm ground for a discussion of consciousness and its different kinds. The model was accepted by some leading minds in neuroscience, such as Sapolsky, Damasio, and the late Panksepp.
I’m not assured at all. You guys seem to be projecting something onto me that isn’t there. But, I’m a materialist. Brains are just matter. Not manna from Heaven. I’ve never heard any persuasive arguments that WBE isn’t doable, before today, and I still haven’t. Emoting about my arrogance or whatever, but no arguments.
This stuff tends to get religious types’ panties in a wad, in my experience.