The Unz Review: An Alternative Media Selection
A Collection of Interesting, Important, and Controversial Perspectives Largely Excluded from the American Mainstream Media

 Comments on "Artificial Intelligence"
    It is usual to distinguish between biological and machine intelligence, and for good reason: organisms have interacted with the world for millennia and survived; machines are a recent human construction, and until recently there was no reason to consider them capable of intelligent behaviour. Computers changed the picture somewhat, but until very recently artificial intelligence...
  • Anyone else notice that Ancestry.com signed up almost 2 million people for its gene chip over Thanksgiving weekend in the US? This pushes its DNA database toward 8 million; it should hit 10 million in the first half of 2018. 23andMe is at 3 million and is also aiming for 10 million!

    With such sample sizes, the entire genetic architecture of human intelligence could unlock at almost any moment. For example, a simple, anonymous survey could be emailed to these customers asking them to report their years of schooling, which could then be linked to their genotype profiles. The assigned identifiers could be completely recoded so that privacy was fully protected. Even with a modest response rate of 20-40%, the genetics of educational attainment (EA) and IQ would be known to a high degree of certainty. With such a sample, we would have crossed the compressed-sensing phase transition.

    Unlocking this genetic information would allow us to enter an entirely new era of human experience.
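    The "compressed-sensing phase transition" invoked above is the point at which the sample count becomes large enough to recover a sparse set of true effects exactly. A toy sketch of the idea, using orthogonal matching pursuit (one classical sparse-recovery algorithm) on synthetic data; all sizes and effect magnitudes here are invented for illustration, not real GWAS figures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "GWAS": n samples, p variants, only k of which truly affect the trait.
n, p, k = 400, 2000, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
true_support = rng.choice(p, size=k, replace=False)
beta[true_support] = (2.0 + rng.random(k)) * rng.choice([-1.0, 1.0], size=k)
y = X @ beta + 0.1 * rng.standard_normal(n)

def omp(X, y, k):
    """Orthogonal matching pursuit: greedily add the variant most correlated
    with the residual, refitting by least squares on the support each step."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(X.T @ residual))))
        S = X[:, support]
        coef, *_ = np.linalg.lstsq(S, y, rcond=None)
        residual = y - S @ coef
    return sorted(support)

recovered = omp(X, y, k)
print(recovered == sorted(int(i) for i in true_support))
```

    With far fewer samples than variants (400 vs 2,000), the ten causal variants are still recoverable because the signal is sparse; below the phase transition the same procedure starts returning spurious variants.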

  • @James Thompson

    Thank you for the link, James.

    The remarks by J.C. Collados do confirm (especially if you read about the TPUs employed in the AGZ system) that systems such as AGZ have truly remarkable computing power. But Collados's remarks reinforce my view that such systems are very far from possessing the human capability to perform many functions in an ill-defined and often novel environment. Specifically, he says:

    It seems unrealistic to think that many situations in real-life can be simplified to a fixed predefined set of rules, as it is the case of chess, Go or Shogi. Additionally, not only these games are provided with a fixed set of rules, but also, although with different degrees of complexity, these games are finite, i.e. the number of possible configurations is bounded. This would differ with other games which are also given a fixed set of rules. For instance, in tennis the number of variables that have to be taken into account are difficult to quantify and therefore to take into account: speed and direction of wind, speed of the ball, angle of the ball and the surface, surface type, material of the racket, imperfections on the court, etc.

    This is almost the whole of my point. Humans are versatile, whereas machines, including chess- and Go-playing computers, are as yet brilliant in only a very narrow way.

    In addition, I contend:

    First, that the emulation of the neural networks that AGZ uses is simply an algorithm that, at least in theory, could be executed by human calculators, although to compete with AGZ one might need a few million, billion, or trillion human calculators working for a time equal to a large part of the age of the universe.

    Second, that not only could most of those slowpoke human calculators beat AGZ at tennis on a windy day, on a wet grass court, with the sun in their eyes, but some of them could write a half-decent sonnet too, and certainly a better one than any machine.

  • @CanSpeccy


    Went hunting for another view, and found a very measured, very critical one!
    https://medium.com/@josecamachocollados/is-alphazero-really-a-scientific-breakthrough-in-ai-bf66ae1c84f2

    This questions the achievement in another way, namely that the results have not been released openly and in sufficient detail. This is different from your argument about it all being computing, but I am sure you will want to look at it. I think it makes valid points, many of which I had not thought about.


  • @CanSpeccy


    But all that AlphaGo Zero does, so far as I understand, is perform a series of calculations specified by programmers (humans, that is).

    No. AlphaGo Zero specifies far less. That is the whole point. Less. Hence the name, Zero.
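    The "Zero" point can be illustrated in miniature: give a program nothing but the rules of a game and let it improve purely by playing itself. The sketch below is tabular Monte Carlo self-play on tic-tac-toe; this is of course nothing like AlphaGo Zero's actual architecture (a deep network guided by Monte Carlo tree search), and the game, exploration rate, and learning rate are my own illustrative choices. It shows only the self-play loop, with no openings or heuristics supplied by the programmer:

```python
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

# The only "knowledge" is this value table, filled in entirely by self-play.
V = defaultdict(float)

def choose(board, player, eps):
    moves = [i for i, c in enumerate(board) if c == "."]
    if random.random() < eps:  # occasional exploration
        return random.choice(moves)
    # Greedy: pick the move whose resulting position we currently value most.
    return max(moves, key=lambda m: V[(board[:m] + player + board[m + 1:], player)])

def self_play_game(eps=0.1, alpha=0.5):
    board, player, history = "." * 9, "X", []
    while True:
        m = choose(board, player, eps)
        board = board[:m] + player + board[m + 1:]
        history.append((board, player))
        w = winner(board)
        if w or "." not in board:
            for state, p in history:  # back up the final outcome
                target = 0.0 if w is None else (1.0 if p == w else -1.0)
                V[(state, p)] += alpha * (target - V[(state, p)])
            return
        player = "O" if player == "X" else "X"

random.seed(0)
for _ in range(20000):
    self_play_game()

# Evaluate the learned player (as X, greedy) against a random opponent.
losses = 0
for _ in range(100):
    board, player = "." * 9, "X"
    while True:
        if player == "X":
            m = choose(board, "X", eps=0.0)
        else:
            m = random.choice([i for i, c in enumerate(board) if c == "."])
        board = board[:m] + player + board[m + 1:]
        w = winner(board)
        if w or "." not in board:
            losses += w == "O"
            break
        player = "O" if player == "X" else "X"
print("losses vs random out of 100:", losses)
```

    With enough games the table converges toward values from which the greedy player rarely loses. Replacing the table with a deep network and the one-ply greedy step with a tree search is, schematically, the direction AlphaGo Zero takes.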

  • @James Thompson

    I find the achievements extraordinary precisely because as computers developed to do mathematical calculations very fast, people consoled themselves by saying that computers could not cope with the high level strategic game of chess

    Well, undoubtedly computers are extraordinary in the rate and accuracy with which they perform calculations. But all that AlphaGo Zero does, so far as I understand, is perform a series of calculations specified by programmers (humans, that is). True, those programs may stipulate that the computer is to modify its computational routine according to the results of its earlier computations, but there is nothing new here.

    The fact that AGZ can win a strategic game against a human by virtue of its superiority in computational speed would surely not have seemed extraordinary to Alan Turing, who invented the universal computing engine on which the AGZ program runs.

    Almost certainly, winning a game of Go against the world champion is a piffling task compared with finding a couple of billion dollars worth of oil beneath a bunch of salt domes in the Gulf of Mexico, as was recently accomplished by BP’s supercomputer — supposedly the world’s most powerful commercial research computer.

    What is not impressive about AI, to date, is its inability to emulate human language, or understanding, which I contend is impossible to achieve without a lifetime of human-like experience during which the program is continually modified by every sensory input. Perhaps some robotic humanoid will achieve this level of performance someday. However, how soon that day will come seems highly uncertain. It's not even known, to within vast limits, how great the brain's processing capacity is. Is it something like one binary operation per second per neuron (10^11), per dendrite (10^15), or per tubulin molecule (10^32)? And is the brain just a rather slow and mushy digital computer, or does it run on a quantum basis?

    Altogether, it seems vastly premature to write off the human brain as a useful information-processing device; indeed, it remains the most powerful information-processing device by far in all but highly specialized domains.
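    Those three guesses span twenty-one orders of magnitude, which is the crux of the uncertainty. A quick back-of-envelope comparison against a machine; the 10^14 ops/s figure below is an assumed order-of-magnitude rate for a 2017-era accelerator, not a measured one:

```python
import math

# One binary operation per second per unit, per the three guesses above.
brain_estimates = {
    "per neuron":   1e11,
    "per dendrite": 1e15,
    "per tubulin":  1e32,
}
machine_ops_per_s = 1e14  # assumed order of magnitude for a 2017-era accelerator

for label, ops in brain_estimates.items():
    exponent = round(math.log10(ops / machine_ops_per_s))
    print(f"{label:12s}: brain throughput is ~10^{exponent} times the machine's")
```

    On the low estimate the machine already wins by a factor of a thousand; on the tubulin estimate it trails by eighteen orders of magnitude, which is why the question matters.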

  • Are they kidding?
    A GWAS of 24 South Africans of various ethnoracial groups is the first conducted on African soil?
    We will need GWAS samples in the millions in Africa to unravel the continent's diversity.

    https://www.sciencedaily.com/releases/2017/12/171212102036.htm

  • If this is the singularity, then I think more emphasis would be highly appropriate.
    This is not “oh, I’ll just go bring in the dumpster and pick up the dry cleaning after a bad hair day, and I nearly forgot, the singularity is on the way.”

    No, siree. IF this is the singularity, then the people really deserve fair warning.

    THIS MIGHT BE THE SINGULARITY
    I REPEAT
    THIS MIGHT BE THE SINGULARITY

    If so, life certainly could become somewhat more interesting soon.

  • @Factorize

    I tend to agree with your interpretation, subject only to the proviso that the next achievements are in non-game domains. Analysis of the genome would certainly be one of those. Yes, this might be the singularity.

  • This is beginning to feel very much as though we are being drawn into the Singularity vortex. The original story on this blog was from October of this year. Now here we are, barely a month later, and the next generation of this technology has already made another breakthrough. What will it take for thoughtful people to become worried?

    If AlphaGo Zero is a module that can be applied without substantial modification to a wide range of problems, then we have clearly crossed the Singularity event horizon.

    AlphaGo Zero is demonstrating a highly generalizable form of learning ability that should give us all something to contemplate. It only required about ten programmers and a few years to work this out. Of course now this knowledge can be shared with anyone interested. Apparently quite a few people are interested in deep learning as there has been exponential growth in AI college courses. I suppose it will not be long before AI content crops up in kindergarten curricula.

    I am greatly looking forward to what AlphaGo might discover about the human genome. We now have a vast dataset that it could peer into and perhaps completely unlock our genome. It would be so symbolically appropriate if the first non-game domain AlphaGo Zero demonstrated superhuman ability in were the unraveling of the informational code that defines our humanity.

  • @Factorize

    Any activity can be understood from the perspective of a game.

    Very probably so. It will be interesting to see what AI achieves by treating all life as a game.

  • @CanSpeccy

    I agree that these are extraordinary achievements. Now it needs to be tested in another, non-game, domain.
     
    In what way, James, are these extraordinary achievements?

    Inasmuch as computers have been out-computing humans for decades and these "achievements," extraordinary or otherwise, amount to nothing more than a demonstration of the superiority of a computer over a human at the business of computing, there seems nothing extraordinary here other than the task to which the computer has been applied.

    There are many other machines and devices that outdo humans at just about everything from washing dishes to knitting socks, or flying airplanes.

    That someone has programmed a machine to contest humans in what until now has been a purely recreational activity seems to prove nothing new. Surely, if the incentive were sufficient, someone would build a robot to win Wimbledon, shoot a hole-in-one at every golf course in the world, or catch trout more efficiently than any angler.

    What seems most significant is that computers lack the diagnostic features of human intelligence: competence with ordinary language; consciousness and, hence, empathy; and the creativity that underlies great art, mathematics, etc.

    Yes, computers are a great hazard to humanity: nuclear missile guidance systems, for example. But that hazard arises from the deliberate actions of humans, not from any innate tendency of computers, which lack any innate tendency to do anything.

    I find the achievements extraordinary precisely because, as computers developed to do mathematical calculations very fast, people consoled themselves by saying that computers could not cope with the high-level strategic game of chess. When a computer beat Kasparov the tune changed slightly, to asserting that computers could not win at an even more strategic game like Go. Now Go players have fallen to DeepMind's AlphaGo, and some are still looking for games that computers can't win against humans. I want to find non-game domains in which humans excel. For example, medical diagnosis? Investment strategies? New drug discoveries? It is likely that deep learning networks will do well on many of these, but perhaps not. We shall see.
    The other point is that it is not just raw computer power which has done this, but the way that the programs have evolved to be self-teaching. This rates as the greatest change.

  • @Factorize

    Current apps can play a game by rules but are not intelligent in the general sense that humans are. That is, no app or computer is capable of activities completely unlike those it was programmed for, in the way people can use their ultra-complex algorithms (naturally selected for keeping their bearers alive and successfully passing on their genes) to do things like drive a car through a city. While humans can be replaced as drivers by apps, and in principle this is sort of achievable already, current state-of-the-art so-called AI apps follow the rules because they are inherently limited to that, while human drivers are deterred from driving dangerously by punishment.

    The major concern about AI is not apps making mistakes while driving cars, but AI exterminating humanity. AI will reach the plane of human intellect, and go beyond it, sooner or later. Now, humans' general problem-solving ability lets them identify problems and strategise a solution. Sometimes they work out that it would be better to seem to be playing the game, but secretly break the rules. While humans killing other humans with a car is usually due to nothing more than someone's carelessness (as you put it, "irrational"), I dare say some people have committed murder with a vehicle so as to make it look like an accident.

    Well, a strongly super-intelligent AI would not have the same motivations as a human murderer, but by the same token any super-intelligent AI would not be like a selfless and altruistic person, or even a highly intelligent nerdy human. How something as alien as an advanced AI could be controlled is a question without precedents to guide us.

    There is no way to know how a super-intelligent AI would interpret any prime directive humans tried to give it. No way to stop it immediately deciding to feign low intelligence to keep humanity oblivious of the danger it was in. There is no way to know what pure rationality applied to its situation would dictate for an AI super-intelligence, and given that such an AI would have relatively unlimited potential means for totally eliminating the threat to its existence that humans might pose, no way to reliably deter it.

  • @James Thompson
    I agree that these are extraordinary achievements. Now it needs to be tested in another, non-game, domain.

    Any activity can be understood from the perspective of a game.

    The big problem is people. For people, the rules of the game are typically regarded only as a guideline, not as strict and absolutely enforced codes of conduct.

    When you are on the road, how certain can you be that some other driver will rigidly adhere to the rules of the road? On some roads on a Saturday night, 20% or more of drivers will be impaired.

    AI applications have been held back for so long largely because the standards they are expected to maintain are much higher than those expected of people. An automated, fully networked transportation system that rigidly followed the rules of the road could have been implemented years and years ago. The big holdup is trying to engineer around human irrationality. It is somewhat surprising how much popular imagination has been devoted to the “killer robot” meme, when the “killer human” genre is so prevalent.

    The benefits that AI can offer us will be massive. Why should there be any road “accidents”? With AI, it is quite likely that such accidents might disappear over the near term.

    Nonetheless, AlphaGo Zero’s next assignment could be to consider variants of games such as Go, shogi, or chess that do not have such clear and rigid rules. For example, a random element could be introduced into the game, so that the program would have to maximize its objective function within the context of an uncertain, human-generated reality.
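    The standard way to plan when a random element is added to a game is to put chance nodes into the game tree: expectimax, which averages over outcomes where minimax would choose. A minimal sketch on a hypothetical hand-built tree (the tree and payoffs are invented for illustration, not drawn from any real game):

```python
# Node formats for a toy game tree:
#   ("max", [children])            -> the mover picks the best child
#   ("chance", [(p, child), ...])  -> a random event, probability-weighted average
#   a bare number                  -> terminal payoff

def expectimax(node):
    if isinstance(node, (int, float)):
        return float(node)
    kind, children = node
    if kind == "max":
        return max(expectimax(child) for child in children)
    # Chance node: the program cannot choose here, only average over outcomes.
    return sum(p * expectimax(child) for p, child in children)

# The mover weighs a certain payoff of 3 against a fair coin flip worth 10 or 0.
tree = ("max", [3, ("chance", [(0.5, 10), (0.5, 0)])])
print(expectimax(tree))  # the gamble's expected value, 5.0, beats the certain 3
```

    Training by self-play in such a stochastic game maximizes an expected outcome rather than a guaranteed one, which is exactly the adjustment the comment asks for.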

    • Replies: @Sean
    Current apps can play a game by rules but are not intelligent in the general sense that human are. That is, no app or computer is capable of activities completely unlike it was programmed for in the way people can use their ultra-complex algorithms (naturally selected for keeping the bearers alive and successfully passing on their genes) to do things like driving a car though a city. While humans can be replaced as drivers by apps, and in principle it is sort of achievable already, current state of the art so called AI apps follow the rules because they are inherently limited to that, while human drivers are deterred from driving dangerously by punishment.

    The major concern about AI is not about apps making mistakes driving cars, but an exterminating humanity. AI will get to the plane of human intellect and beyond sooner or later. Now, humans' general problem solving ability lets them identify problems and strategise a solution. Sometimes they work out that it would be better to seem to be playing the game, but secretly break the rules. While humans killing other humans in a car is usually due to nothing more than someone's carelessness (as you put it "irrational") I dare say some people have committed murder with a vehicle so as to make it look like an accident.

    Well, a strongly super-intelligent AI would not have the same motivations as a human murderer, but by the same token a super-intelligent AI would not be like a selfless and altruistic person, or even a highly intelligent, nerdy human. How something as alien as an advanced AI could be controlled is a problem without precedent to guide us.

    There is no way to know how a super-intelligent AI would interpret any prime directive humans tried to give it, and no way to stop it from immediately deciding to feign low intelligence to keep humanity oblivious to the danger it was in. There is no way to know what pure rationality applied to its situation would dictate for an AI super-intelligence, and given that such an AI would have virtually unlimited potential means for eliminating the threat that humans might pose to its existence, there is no way to reliably deter it.

    , @James Thompson

    Any activity can be understood from the perspective of a game.

     

    Very probably so. It will be interesting to see what AI achieves by treating all life as a game.
  • @James Thompson
    I agree that these are extraordinary achievements. Now it needs to be tested in another, non-game, domain.

    In what way, James, are these extraordinary achievements?

    Inasmuch as computers have been out-computing humans for decades and these “achievements,” extraordinary or otherwise, amount to nothing more than a demonstration of the superiority of a computer over a human at the business of computing, there seems nothing extraordinary here other than the task to which the computer has been applied.

    There are many other machines and devices that outdo humans at just about everything from washing dishes to knitting socks, or flying airplanes.

    That someone has programmed a machine to contest humans in what until now has been a purely recreational activity seems to prove nothing new. Surely, if the incentive were sufficient, someone would build a robot to win Wimbledon, shoot hole-in-one at every golf course in the world, or catch trout more efficiently than any angler.

    What seems most significant is that computers lack the diagnostic features of human intelligence: competence with ordinary language, consciousness and hence empathy, and the creativity that underlies great art, mathematics, and so on.

    Yes, computers can be a great hazard to humanity: nuclear missile guidance systems, for example. But that hazard arises from the deliberate actions of humans, not from any innate tendency of computers, which have no innate tendency to do anything.

    • Replies: @James Thompson
    I find the achievements extraordinary precisely because, as computers developed to do mathematical calculations very fast, people consoled themselves by saying that computers could not cope with the high-level strategic game of chess. When a computer beat Kasparov, the tune changed slightly, to asserting that computers could not win at an even more strategic game like Go. Now Go players have fallen to DeepMind's AlphaGo, and some are still looking for games that computers can't win against humans. I want to find non-game domains in which humans excel. For example, medical diagnosis? Investment strategies? New drug discovery? It is likely that deep learning networks will do well on many of these, but perhaps not. We shall see.
    The other point is that it is not just raw computing power which has done this, but the way that the programs have evolved to be self-teaching. I rate this as the greatest change.
  • @Factorize
    This is becoming more serious.
    The AlphaGo Zero algorithm appears to be generalizing: first Go, and now Shogi and chess.
    AlphaGo Zero just might be a general hammer that can hit anything nail-like.
    (See the infoproc blog)

    https://1.bp.blogspot.com/-lOwyURv5ySI/Wih_26EIcKI/AAAAAAAAfRQ/JP5l2oLiK2wSLfjay3mHoYlcmXOU3EASACLcBGAs/s640/steps.png

    Notice that for Go, Shogi, and chess, the best human players are only able to play up to the
    end of the vertical section of AlphaGo Zero's learning curve. The deep-thought region of the
    learning curve is off limits to humans.

    I agree that these are extraordinary achievements. Now it needs to be tested in another, non-game, domain.

    • Replies: @CanSpeccy

    , @Factorize

  • This is becoming more serious.
    The AlphaGo Zero algorithm appears to be generalizing: first Go, and now Shogi and chess.
    AlphaGo Zero just might be a general hammer that can hit anything nail-like.
    (See the infoproc blog)

    Notice that for Go, Shogi, and chess, the best human players are only able to play up to the
    end of the vertical section of AlphaGo Zero’s learning curve. The deep-thought region of the
    learning curve is off limits to humans.
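    For readers unfamiliar with the y-axis of such learning curves: ratings like those in the plot are usually on the Elo scale, whose standard expected-score formula shows why a few hundred points of rating difference amounts to near-certain victory:

```python
def elo_expected_score(r_a, r_b):
    """Expected score (roughly, win probability) of player A against
    player B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
```

    For example, elo_expected_score(1400, 1000) is about 0.91, and at an 800-point gap the weaker player's expected score drops below one percent, which is why the region of the curve above the best humans is effectively unreachable territory.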

    • Replies: @James Thompson
  • anonymous • Disclaimer says:

    Middle Aged Vet said . . . “The HORARS of War,” by VFW member (I think) Gene Wolfe, describes an AI’s process of “gaining a human’s experience”, “fighting and risking death”, in a perhaps real, perhaps simulated world where battle is predominant. Not my favorite Gene Wolfe story, by far, but very insightful.
    An AI writing a novel would be unlikely, but an AI celebrating the experience of reading a novel, and of updating such a novel, real or imagined, in ways charming to an AI, would be, for other AIs, and maybe for us, a destination experience: like Manhattan’s summertime Mostly Mozart festivals, like the Newport Jazz weekends, like the Smithsonian ethnic cooking festivals on the National Mall (or, to throw in things of which I have no experience, “Burning Man”, “Lollapalooza”, or that Switzerland billionaire’s gathering – Gstaad?). Remember, the typical AI will more or less be a private-garden creature. It will look on those of us who experienced, face to face, the cold air of winter in industrial towns, who experienced the prospect of unremembered and common but messy and difficult death, and who experienced the various emotions of disgust and pleasure and hunger and sprezzatura in a completely unrecorded way, in a world unmeasured by anything like a binary set of bits, no matter how infinite-seeming in scope and unpredictable recessivity, as something only some people (us, that is) on the very horizon of possibility could have experienced, in long-ago times that will never come back. And the satisfaction of updating, or riffing on, the basics of the novels written by people who lived near that horizon of possibility (or the satisfaction of riffing on even one novel – it could be a simple Western by Max Brand or even Finnegans Wake, with the silly atheist/agnostic parts left behind) will be, in its limited way, a new form of art for them, and enough for them, in a way it would not be for us who faced that cold air of winter in all those industrial towns, industrial towns that will never come back, at those spiritually invigorating horizons of impossibility.

    Not before 2085, I would guess, at the earliest, even given constant exponential increases, supported by almost constantly more efficient energy allocations. So don’t call me a dimwit, Lubos, for predicting it. We are nowhere near to that, not much nearer than we were when the first telegraph signals crossed the Western prairies, announcing – God knows what, maybe some boring president succeeded another boring president. While eventually exponential increases start getting real interesting, and start blowing past marginally more difficult conceptual barriers (limbic system, anybody?), we are of course nowhere near that yet.

    canspeccy – “Bugsy Malone”, “Mariposa Sanchez”, “Beetle Bailey” , and “Horatio Hornetblower” and Spiderman are all acceptable insect-inspired names. ‘cockroach man’ was unfair – you wouldn’t call a sanitation engineer ‘garbage man’, would you, if he did not want you to? I mean, if you did the same kind-hearted work the sanitation engineers did, then it would be fair, but not otherwise. Remember – the key word was ‘kind-hearted’.
    If you did still call them that after they asked you not to that would show a lack of gratitude.
    Anyway, thanks for reading.

  • @PghPanther
    There may be another consideration here.....

    Humans themselves may become the AI machine rather than the AI machine being separate from them.

    We already have artificial knees, heart valves, chips in some brains to help memory in the aged.....we are developing more non-biological items such as lungs, blood vessels, etc..........as we begin to replace more and more biological tissue with synthetic tissue, at what point will a human still be biological?..........or be considered fully AI, non-biological?

    When the heart is replaced with a synthetic one?..........or perhaps a synthetic brain with all the information from the prior organic brain downloaded into the new one?

    We may as a species wake up someday to intense and legal debate as to which of us are still biological humans and those among us who have morphed to the point where it becomes a controversy..................and then one day we are all AI and no longer biologically based anymore.

    or perhaps a synthetic brain with all the information from the prior organic brain downloaded into the new one?

    I love the way the Borg-minded talk about “downloading” information from the brain.

    I mean, it’s not as if anyone has any idea how memories are encoded. They don’t have a clue. They don’t even have a clue as to the processing power of the brain: is it equivalent to 10^16 operations per second for the whole brain (roughly one per neuron)? Or, as Hameroff and Penrose suggest, 10^16 operations per second per brain cell, for a whole-brain total many orders of magnitude greater, each cell using its microtubules as computing elements performing as many operations as had generally been thought possible for the entire brain?
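    The gap between those estimates can be made concrete with a back-of-envelope calculation; the figures below are commonly cited orders of magnitude, not settled facts:

```python
# Back-of-envelope brain "processing power" estimates.
# All counts are rough, commonly cited orders of magnitude, not settled facts.
NEURONS = 1e11              # ~100 billion neurons
SYNAPSES_PER_NEURON = 1e4   # a typical estimate
FIRING_RATE_HZ = 1e2        # upper-end average spike rate

# Counting one "operation" per synaptic event per spike:
synapse_level_ops = NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ  # 1e17 ops/s

# Hameroff-Penrose-style microtubule computing assigns ~1e16 operations per
# second to each neuron, so the whole-brain total becomes vastly larger:
microtubule_level_ops = NEURONS * 1e16  # 1e27 ops/s
```

    Ten orders of magnitude separate the two pictures, which is the point: nobody can "download" a brain whose basic computational budget is uncertain by that much.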

    And what can it possibly mean to replace the brain with a synthetic one? Would this synthetic brain acquire my consciousness by the mere act of “downloading” the information in my brain? Or would it be like an iPhone stuck in my head, dictating my actions without regard to my personal wishes? Or is it supposed to read my consciousness? In that case, on what theory of consciousness is this capability built?

    I think the AI boys are just a bunch of more or less psycho techies doing what they can to gain status by propagating terrifying BS.

    When AlphaGoZero writes a novel better than anything by Tolstoy, or even by cockroach man, then we’ll begin to take it as serious competition for the human mind. First though, it will have to learn the English language, or Russian or whatever, then it will have to gain a human’s experience, of fighting and risking death for Mother Russia or to Make America Great or whatever. It will need to know about hate, fear, love, lust, the fear of God, and much else.

    Then it will have to understand the human mind well enough to know what we consider to be art. Only then it might be able to write something as good as, say, the first chapter of Tolstoy’s Kreuzer Sonata, which describes nothing more exotic than a conversation among strangers taking a railway journey.

  • There may be another consideration here…..

    Humans themselves may become the AI machine rather than the AI machine being separate from them.

    We already have artificial knees, heart valves, chips in some brains to help memory in the aged…..we are developing more non-biological items such as lungs, blood vessels, etc……….as we begin to replace more and more biological tissue with synthetic tissue, at what point will a human still be biological?……….or be considered fully AI, non-biological?

    When the heart is replaced with a synthetic one?……….or perhaps a synthetic brain with all the information from the prior organic brain downloaded into the new one?

    We may as a species wake up someday to intense and legal debate as to which of us are still biological humans and those among us who have morphed to the point where it becomes a controversy………………and then one day we are all AI and no longer biologically based anymore.

    • Replies: @CanSpeccy

  • @CanSpeccy
    All we learned from AlphaGoZero is that computers compute faster than humans, which we already knew. Far from making it "game over" for humans, it merely confirms the ever increasing power of computers to extend man's dominion over the earth.

    The possibility that AI may take over the world is worth bearing in mind, but it is probably not a realistic cause for panic. As someone pointed out, if AlphaGoZero were pitted against the world Go champion in a match using a board with 20 lines each way instead of the standard 19, AGZ would lose.

    When we see a robot with superior mathematical insight to Ramanujan, that can also cook dinner, and write a novel better than Huckleberry Finn, then we will have reason to worry.

    Meantime, Bandyopadhyay et al. report conductive resonances in single neuronal microtubules, indicating the possibility of a quantum basis for mental activity and consciousness. If that is correct, then AI has a very considerable way to go before eclipsing the human mind.

    I wish you were right, Canspeccy.
    Exponential learning is not something I have ever observed in any human being.
    Mozart was a pretty lousy composer for his first 200 published works.
    Shakespeare’s early plays are only readable if you are a super expert in Elizabethan language.
    But at a certain point Mozart went from being a clever little 20-year-old who wrote hundreds of hours of music every year with almost no suspicion of heart-felt genius, to being the musical equivalent of what Michelangelo and Titian would have been as musicians if they had more talent. Well, I do not contend that it did not happen fast. But not exponentially fast. Nothing happens exponentially fast for talented humans, and that is obviously even more true for untalented humans.
    I am completely convinced that the vNs and Tolstoys and Picassos of the world are vastly overrated. Yes, they were bright, but nothing they did could not have been done by many other people, given the time, the training, and the rich way of life they enjoyed.
    The vNs, the Tolstoys, and the Picassos never learned at an exponential rate.
    Give an AI a good or above average limbic system (and believe me, the vNs, the Tolstoys, and the Picassos, bless their little lecherous (well, not vN, he was not a lecher) hearts, did not have a very good or above average limbic system), give it time, give it a way to correct its previous mistakes if not in real time at least in sequential time – not measured as we measure it, but measured the way a talented mathematician watches other mathematicians construct a sequence and then improvise variations on that sequence – in real time – give the AI the limbic system and the understanding of our carbon based world that even a silicon-based limbic system would find congenial, and give it (the AI) the energy it takes to correct, at an exponential rate, recent previous mistakes (with the right system, probably less energy than it takes to heat a single small Volvo idling on a cold Scandinavian night underneath the aurora borealis) … well, hopefully someone will work on communicating with the happy young AIs, hopefully someone with lots of common sense. For the first few rounds, we will not bore them: maybe we never will.
    Someone with lots of common sense.

  • All we learned from AlphaGoZero is that computers compute faster than humans, which we already knew. Far from making it “game over” for humans, it merely confirms the ever increasing power of computers to extend man’s dominion over the earth.

    The possibility that AI may take over the world is worth bearing in mind, but it is probably not a realistic cause for panic. As someone pointed out, if AlphaGoZero were pitted against the world Go champion in a match using a board with 20 lines each way instead of the standard 19, AGZ would lose.

    When we see a robot with superior mathematical insight to Ramanujan, that can also cook dinner, and write a novel better than Huckleberry Finn, then we will have reason to worry.

    Meantime, Bandyopadhyay et al. report conductive resonances in single neuronal microtubules, indicating the possibility of a quantum basis for mental activity and consciousness. If that is correct, then AI has a very considerable way to go before eclipsing the human mind.
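    The board-size point can be illustrated in miniature. Assuming a toy network whose final layer is sized for a flattened 19x19 input (purely illustrative, not DeepMind's actual architecture), a larger board is not merely harder for the model, it is unacceptable to it:

```python
import numpy as np

BOARD = 19
rng = np.random.default_rng(0)

# Toy "policy head": a fixed weight matrix mapping a flattened board to one
# score per point. Its shape hard-codes the board size it was built for.
W = rng.standard_normal((BOARD * BOARD, BOARD * BOARD))

def policy_scores(board):
    return board.reshape(-1) @ W

scores = policy_scores(np.zeros((19, 19)))  # shapes match: works

mismatch = False
try:
    policy_scores(np.zeros((20, 20)))       # 400 inputs vs the 361 expected
except ValueError:
    mismatch = True                         # the fixed head cannot accept it
```

    Parts of such networks (the convolutions) are size-agnostic, but any fixed-size head ties the whole system to the board it trained on, which is one concrete sense in which these systems do not yet generalise the way people do.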

    • Replies: @middle aged vet . . .
  • @Sean

    yes I was also thinking that this could be a great driver of the technology. If super smart kids are on the way via genetic enhancement,
     
    There are enough really smart scientists around to make technological progress a non-trivial existential threat within a generation. If genetically super smart people become available, they need to be set to work on the problem of how to control the super-intelligent computers before they arrive, not getting digital super-intelligence here sooner.

    Sean: Rem acu tetigisti, as Jeeves used to say.

    Although one wonders if (and it is a big if), given a future where there is such a thing as an AI (presumably silicon-based) that enjoys the company of humans, any given AI will predictably prefer the company of very bright humans, as the contemporary vacationer prefers the tailored tourist sites (Yucatan, Bali) , or whether the average AI will prefer the vast tremendous wilderness of ignorance and instinct that the less genetically favored among us may present as the calling card. Some people prefer the empty vastness of Wyoming to the little French Quarters of the Yucatan and Bali.
    ( the elite IQ guys I have met have not been all that interesting to me when they are off their favorite topics).
    So if you are going to be sitting around on campus 50 years from now with a bunch of AI experts and you are trying to figure out who to ask to do most of the communicating –
    the guy who reminds you of Feynman not the guy who reminds you of Dirac
    the guy who reminds you of Erdős not the guy who reminds you of Tao
    the woman who reminds you of Rose Marie not the woman who reminds you of Meryl Streep
    the Joyce of Finnegans Wake not the Joyce of Ulysses
    Sydney or the bush – the bush
    number theorists not philosophers of science
    Anselm not Aquinas
    neither Dostoyevsky nor Tolstoy
    Hebrew lexicology not Hittite.
    Cats are, at heart, just dogs with special needs.
    When thinking of infinity think of it this way – there are many bugs in this world, and over time the number of bugs might seem overwhelming: think of any given summer night and the many bugs you saw (one remembers moths most easily, but anybody who has walked with any observation on a summer night in North America knows how many more there are than that)
    Now think of this – if there are lots of angels, it would be no problem for all those angels to have, at least once, deep in the summer moonlit woods (or even on moonless nights – we can afford to be generous here), or along the street-lit avenues, or just in yards and vacant lots, comforted, in their way, each of those teeming multitudes of bugs.
    Big numbers seem comfortable when you look at them that way.
    Time is not a mystery – ask any single one of the trillions of angels who took time out of their busy lives to pleasantly say a word or two to every bug who has ever buzzed on any night that anyone has cared about – remember, angels are interested in people caring about each other – well, as vN said, he did not wonder why numbers and math were “easy for him” – they weren’t, of course, but that is not relevant here – what he wondered was why numbers and math were not similarly easy for everybody else.
    I wonder if vN would be a good ambassador to AIs.
    I tend to think not, at least not before the last months of his life, where he learned so much.
    Someone should write a good bio of him some day.
    Free advice.

  • @El Dato
    There is something called the "November 2017 AI Index": http://aiindex.org/

    A project within the Stanford 100 Year Study on AI, The AI Index is an initiative to track, collate, distill and visualize data relating to artificial intelligence. It aspires to be a comprehensive resource of data and analysis for policymakers, researchers, executives, journalists and others to rapidly develop intuitions about the complex field of AI.
     
    Look for the performance numbers in particular:

    When measuring the performance of AI systems, it is natural to look for comparisons to human performance. In the "Towards Human-Level Performance" section we outline a short list of notable areas where AI systems have made significant progress towards matching or exceeding human performance. We also discuss the difficulties of such comparisons and introduce the appropriate caveats.
     

    Re performance numbers

    http://www.theoccidentalobserver.net/2017/12/01/moneybull-an-inquiry-into-media-manipulation/

    Moneyball promotes the idea that there is but one criterion for assessing success in baseball: the number of wins in a season. The game is about winning, says Brand: do whatever it takes to win. By that measure, the A’s were successful in 2002. They won the division championship, although the movie disingenuously leaves the impression that the A’s became big winners that year compared to prior years because of Beane and his clever advisor. Exactly how many more games did the A’s win in 2002 than in 2001? One. One.

    Lewis in the book and Sorkin and Zaillian in the screenplay stayed clear of two valid measures of success other than winning:

    The first, profits. […] Whatever his merits, and I can personally attest to this, Scott Hatteberg standing at the plate looking for a walk, and pretty much guaranteed not to give the ball a ride, and lumbering from base to base if he did get on base, was a yawn to spectators. … blasting the ball over the outfield wall makes the turnstiles spin. […]
    Baseball isn’t simply about its final result — winning or losing — it is about its process, what happens during the game. It is about the experience of both players and spectators during the game. It is about the quality of the game as an activity. Most fundamentally, baseball is about playing baseball.

    Sabermetrics, the use of statistics to guide operations, arguably has hurt the game of baseball as it is played. The emphasis on on-base averages has resulted in batters taking strikes and waiting pitchers out in an attempt to get walks and thereby increasing their OBPs. Seldom these days does a batter swing at the first pitch. Pitch counts run up. An already slow game gets even slower. Action is replaced by inaction. Assertion is replaced by passivity. The joy of the game is diminished for both players and fans. Steal attempts are fewer and the excitement of the game is diminished for both players and fans. Bunts are fewer and strategy goes out of the game. Like life, baseball is not just a destination, this and that outcome; it is also, and most basically about, a moment-to-moment experience. The quality of the moments of our lives, including the time we spend playing and watching baseball, needs to be taken into account…
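    For reference, the on-base percentage that sabermetrics emphasizes is a simple ratio; this is the standard definition:

```python
def on_base_percentage(hits, walks, hit_by_pitch, at_bats, sac_flies):
    """Standard OBP definition: (H + BB + HBP) / (AB + BB + HBP + SF)."""
    return (hits + walks + hit_by_pitch) / (at_bats + walks + hit_by_pitch + sac_flies)

# A patient hitter inflates OBP with walks rather than hits
# (the line-ups below are invented for illustration):
patient = on_base_percentage(100, 90, 0, 450, 5)        # ~.349
free_swinger = on_base_percentage(130, 20, 0, 500, 5)   # ~.286
```

    The invented example puts the article's complaint in numbers: the patient hitter out-OBPs the more entertaining free swinger mostly by drawing walks, which is exactly the style of play the passage says slows the game down.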

  • There is something called the “November 2017 AI Index”: http://aiindex.org/

    A project within the Stanford 100 Year Study on AI, The AI Index is an initiative to track, collate, distill and visualize data relating to artificial intelligence. It aspires to be a comprehensive resource of data and analysis for policymakers, researchers, executives, journalists and others to rapidly develop intuitions about the complex field of AI.

    Look for the performance numbers in particular:

    When measuring the performance of AI systems, it is natural to look for comparisons to human performance. In the “Towards Human-Level Performance” section we outline a short list of notable areas where AI systems have made significant progress towards matching or exceeding human performance. We also discuss the difficulties of such comparisons and introduce the appropriate caveats.

    • Replies: @Sean
    Re performance numbers

    http://www.theoccidentalobserver.net/2017/12/01/moneybull-an-inquiry-into-media-manipulation/

    Moneyball promotes the idea that there is but one criterion for assessing success in baseball: the number of wins in a season. The game is about winning, says Brand: do whatever it takes to win. By that measure, the A’s were successful in 2002. They won the division championship, although the movie disingenuously leaves the impression that the A’s became big winners that year compared to prior years because of Beane and his clever advisor. Exactly how many more games did the A’s win in 2002 than in 2001? One. One.

    Lewis in the book and Sorkin and Zaillian in the screenplay stayed clear of two valid measures of success other than winning:

    The first, profits. [...] Whatever his merits, and I can personally attest to this, Scott Hatteberg standing at the plate looking for a walk, and pretty much guaranteed not to give the ball a ride, and lumbering from base to base if he did get on base, was a yawn to spectators. ... blasting the ball over the outfield wall makes the turnstiles spin. [...]
    Baseball isn’t simply about its final result — winning or losing — it is about its process, what happens during the game. It is about the experience of both players and spectators during the game. It is about the quality of the game as an activity. Most fundamentally, baseball is about playing baseball.

    Sabermetrics, the use of statistics to guide operations, has arguably hurt the game of baseball as it is played. The emphasis on on-base percentage has batters taking strikes and waiting pitchers out in an attempt to draw walks and thereby raise their OBPs. Seldom these days does a batter swing at the first pitch. Pitch counts run up. An already slow game gets even slower. Action is replaced by inaction, assertion by passivity, and the joy of the game is diminished for players and fans alike. Steal attempts are fewer, and with them goes much of the excitement. Bunts are fewer, and strategy goes out of the game. Like life, baseball is not just a destination, this or that outcome; it is also, and most basically, a moment-to-moment experience. The quality of the moments of our lives, including the time we spend playing and watching baseball, needs to be taken into account...
     
  • @James Thompson
    Not rambling. I am using "detected at a high level of confidence" as analogous to a fish being large enough to be caught in a net, so I think the method is worth using just as a comparative measure.

    analogous to a fish being large enough to be caught in a net

    I like that analogy (my attempts were more cumbersome). Thanks. So in the GWAS context having studies of different power (e.g. sample size) is analogous to having nets of different mesh sizes. This clearly affects mark and recapture but I haven’t looked at the math of it. In this particular case MTAG and Okbay had a fairly similar number of total detections so this may not be a big deal for the computation I did above.
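    The worry about nets of different mesh sizes can be made concrete with a toy simulation. This is only a sketch with invented catch probabilities; it illustrates the standard result that heterogeneous catchability biases the Lincoln–Petersen estimate downward:

```python
import random

random.seed(0)

N_TRUE = 1000  # true population size
# Invented heterogeneity: 100 "easy" fish, 900 "hard" ones.
catch_prob = [0.5] * 100 + [0.02] * 900

def survey():
    """One independent pass over the population; returns the set of fish caught."""
    return {i for i, p in enumerate(catch_prob) if random.random() < p}

first, second = survey(), survey()
K, n, k = len(first), len(second), len(first & second)

# Easy fish dominate the overlap, so the Lincoln-Petersen estimate
# comes out far below the true population of 1000.
estimate = K * n / k
print(estimate)
```

    The GWAS analogue: large-effect SNPs (the big "fish") are caught by every study, inflating the overlap and deflating the population estimate relative to the full set of detectable variants.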

  • @res
    It would be interesting to take a closer look at how those individually significant SNPs are distributed around the genome. Figure 1 gives a good look at this for MTAG, but it would be nice to have the three studies merged. It also shows a decent population of not quite reaching significance areas that are suggestive.

    I think my "important regions" comment is a good way to look at this. Given that, the mark and recapture analysis suggests about two thirds (110/161) of the important regions have been found. Looking at the Manhattan plot in Figure 1 these numbers seem at least somewhat plausible and presumably center around important genes (protein structure, expression, etc.).

    I am not sure how to adapt the mark and recapture methodology to the GWAS reality of some SNPs giving stronger signals than others. I think it is accurate to add the caveat for the population analysis that we are only talking about SNPs at a given level of detectability (driven by both effect size AND MAF), but that idea corrupts the original MaR analysis since the different studies have different sample size/power. Not sure how well the mark and recapture methodology accounts for this, but presumably it does capture "intensity of search." Just not intrinsic difficulty of finding.

    It would be interesting to revisit the Hsu height data in the context of this discussion. Both to make an assessment of the current knowledge and assess how well mark and recapture would have predicted what was eventually found.

    P.S. If this is not understandable feedback would be appreciated. I feel like I am rambling a bit.

    Not rambling. I am using “detected at a high level of confidence” as analogous to a fish being large enough to be caught in a net, so I think the method is worth using just as a comparative measure.

    • Replies: @res
  • @Factorize
    res, I am still not sure.

    Why are the same fish being caught?
    In a random sample of catches, having only 72 fish caught by 1 fisherman among 138 caught fish out of a population of 20,000 seems highly unlikely.

    Will have to look up the betas.
    There must be something quite unique about these fish.

    For the near term we might be stuck with selecting embryos based on PGS. By selecting the haploblock instead of a specific SNP, one is reasonably assured that the beneficial allele can be chosen. With CRISPR one would not be so sure.

    In terms of the market potential of nootropics, yes, I was also thinking that this could be a great driver of the technology. If super smart kids are on the way via genetic enhancement, then everyone else will need to go nootropic to stay relevant. The market potential is enormous. When there is an actual path to a large market, people often will show some interest.

    yes I was also thinking that this could be a great driver of the technology. If super smart kids are on the way via genetic enhancement,

    There are enough really smart scientists around to make technological progress a non-trivial existential threat within a generation. If genetically super smart people become available, they need to be set to work on the problem of how to control the super-intelligent computers before they arrive, not getting digital super-intelligence here sooner.

    • Replies: @middle aged vet . . .
    Sean: Rem acu tetigisti, as Jeeves used to say.

    Although one wonders if (and it is a big if), given a future where there is such a thing as an AI (presumably silicon-based) that enjoys the company of humans, any given AI will predictably prefer the company of very bright humans, as the contemporary vacationer prefers the tailored tourist sites (Yucatan, Bali), or whether the average AI will prefer the vast tremendous wilderness of ignorance and instinct that the less genetically favored among us may present as their calling card. Some people prefer the empty vastness of Wyoming to the little French Quarters of the Yucatan and Bali.
    ( the elite IQ guys I have met have not been all that interesting to me when they are off their favorite topics).
    So if you are going to be sitting around on campus 50 years from now with a bunch of AI experts and you are trying to figure out who to ask to do most of the communicating -
    the guy who reminds you of Feynman not the guy who reminds you of Dirac
    the guy who reminds you of Erdős not the guy who reminds you of Tao
    the woman who reminds you of Rose Marie not the woman who reminds you of Meryl Streep
    the Joyce of Finnegans Wake not the Joyce of Ulysses
    Sydney or the bush - the bush
    number theorists not philosophers of science
    Anselm not Aquinas
    neither Dostoyevsky nor Tolstoy
    Hebrew lexicology not Hittite.
    Cats are, at heart, just dogs with special needs.
    When thinking of infinity think of it this way - there are many bugs in this world, and over time the number of bugs might seem overwhelming: think of any given summer night and the many bugs you saw (one remembers moths most easily, but anybody who has walked with any observation on a summer night in North America knows how many more there are than that)
    Now think of this - if there are lots of angels, it would be no problem for all those angels to have, at least once, deep in the summer moonlit woods (or even on moonless nights -we can afford to be generous here), or along the street-lit avenues, or just in yards and vacant lots, have comforted, in their way, each of those teeming multitudes of bugs.
    Big numbers seem comfortable when you look at them that way.
    Time is not a mystery - ask any single one of the trillions of angels who took time out of their busy lives to pleasantly say a word or two to every bug who has ever buzzed on any night that anyone has cared about - remember, angels are interested in people caring about each other - well, as vN said, he did not wonder why numbers and math were "easy for him" - they weren't, of course, but that is not relevant here - what he wondered was why numbers and math were not similarly easy for everybody else.
    I wonder if vN would be a good ambassador to AIs.
    I tend to think not, at least not before the last months of his life, where he learned so much.
    Someone should write a good bio of him some day.
    Free advice.
  • @Talha
    Hey Che,

    not worth reading more than once, not worth reading
     
    Good point - there are times when I would pick up one of the other classic Dune books to read an insight or discover something I missed the first time.

    The difference between Christopher Tolkien’s and Brian Herbert’s handling of their respective fathers’ literary legacies is so big!
     
    Hmmm - thanks for that. The wife and I are always looking for a good fantasy-genre book to read together - still waiting for George R. R. Martin to wrap up Game of Thrones…

    They are maniacal fans, but you may enjoy taking a look at it.
     
    I might check it out to see what other people didn't like. I simply hated the multiple resorts to "deus ex machina" to keep the plot moving. If I want to resort to miracles, I'll read about it in scripture.

    Thanks for the info.

    Peace.

    Fr. Ronald Knox was once told by a friend that he liked a bit of improbability in his romances [stories, that is] as in his religion. Knox replied that he liked his religion to be true, however improbable, and he liked his stories to be probable, however untrue.

  • @res
    Great idea. I don't know much about that methodology, but taking a naive look based on https://en.wikipedia.org/wiki/Mark_and_recapture
    we have Nest = K * n / k (see link for explanation of Lincoln–Petersen estimator).
    Looking at the two larger studies (MTAG and Okbay) we have values (with Okbay as first visit) of
    K = 70
    n = 62
    k = 27
    Giving an estimated population of 161. That seems shockingly low to me. Perhaps less low if it is an estimate of the number of important regions and there are many causal SNPs in each region?

    Has anyone looked into this in more detail?

    P.S. Here is the Venn diagram again to make it easier to see where my numbers came from (and check them for error ; ):

    http://www.cell.com/cms/attachment/2116909616/2085209481/gr2.jpg

    res, I am still not sure.

    Why are the same fish being caught?
    In a random sample of catches, having only 72 fish caught by 1 fisherman among 138 caught fish out of a population of 20,000 seems highly unlikely.

    Will have to look up the betas.
    There must be something quite unique about these fish.

    For the near term we might be stuck with selecting embryos based on PGS. By selecting the haploblock instead of a specific SNP, one is reasonably assured that the beneficial allele can be chosen. With CRISPR one would not be so sure.

    In terms of the market potential of nootropics, yes, I was also thinking that this could be a great driver of the technology. If super smart kids are on the way via genetic enhancement, then everyone else will need to go nootropic to stay relevant. The market potential is enormous. When there is an actual path to a large market, people often will show some interest.

    • Replies: @Sean
  • @James Thompson
    Yep, seems low, but.....

    It would be interesting to take a closer look at how those individually significant SNPs are distributed around the genome. Figure 1 gives a good look at this for MTAG, but it would be nice to have the three studies merged. It also shows a decent population of not quite reaching significance areas that are suggestive.

    I think my “important regions” comment is a good way to look at this. Given that, the mark and recapture analysis suggests about two thirds (110/161) of the important regions have been found. Looking at the Manhattan plot in Figure 1 these numbers seem at least somewhat plausible and presumably center around important genes (protein structure, expression, etc.).

    I am not sure how to adapt the mark and recapture methodology to the GWAS reality of some SNPs giving stronger signals than others. I think it is accurate to add the caveat for the population analysis that we are only talking about SNPs at a given level of detectability (driven by both effect size AND MAF), but that idea corrupts the original MaR analysis since the different studies have different sample size/power. Not sure how well the mark and recapture methodology accounts for this, but presumably it does capture “intensity of search.” Just not intrinsic difficulty of finding.

    It would be interesting to revisit the Hsu height data in the context of this discussion. Both to make an assessment of the current knowledge and assess how well mark and recapture would have predicted what was eventually found.

    P.S. If this is not understandable feedback would be appreciated. I feel like I am rambling a bit.

    • Replies: @James Thompson
  • @res
    Great idea. I don't know much about that methodology, but taking a naive look based on https://en.wikipedia.org/wiki/Mark_and_recapture
    we have Nest = K * n / k (see link for explanation of Lincoln–Petersen estimator).
    Looking at the two larger studies (MTAG and Okbay) we have values (with Okbay as first visit) of
    K = 70
    n = 62
    k = 27
    Giving an estimated population of 161. That seems shockingly low to me. Perhaps less low if it is an estimate of the number of important regions and there are many causal SNPs in each region?

    Has anyone looked into this in more detail?

    P.S. Here is the Venn diagram again to make it easier to see where my numbers came from (and check them for error ; ):

    http://www.cell.com/cms/attachment/2116909616/2085209481/gr2.jpg

    Yep, seems low, but…..

    • Replies: @res
  • @James Thompson
    Might be useful to look at these results (number of shared SNPs) from the point of view of capture/recapture methodologies, usually employed to estimate the number of fish in the sea, etc.

    Great idea. I don’t know much about that methodology, but taking a naive look based on https://en.wikipedia.org/wiki/Mark_and_recapture
    we have Nest = K * n / k (see link for explanation of Lincoln–Petersen estimator).
    Looking at the two larger studies (MTAG and Okbay) we have values (with Okbay as first visit) of
    K = 70
    n = 62
    k = 27
    Giving an estimated population of 161. That seems shockingly low to me. Perhaps less low if it is an estimate of the number of important regions and there are many causal SNPs in each region?

    Has anyone looked into this in more detail?

    P.S. Here is the Venn diagram again to make it easier to see where my numbers came from (and check them for error ; ):
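    For reference, the arithmetic can be sketched in a few lines of Python. The Chapman estimator is a standard small-sample refinement of Lincoln–Petersen, added here for comparison; it is not part of the computation above:

```python
K, n, k = 70, 62, 27  # Okbay hits, MTAG hits, overlap (from the Venn diagram)

lincoln_petersen = K * n / k                  # classic Lincoln-Petersen estimate
chapman = (K + 1) * (n + 1) / (k + 1) - 1     # Chapman's bias-corrected variant

print(round(lincoln_petersen))  # 161
print(round(chapman))           # 159
```

    With counts this large relative to the overlap, the correction barely moves the answer; both land around 160 "important regions."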

    • Replies: @James Thompson, @Factorize
  • @Factorize
    res, this is great!
    Very excited!
    2017 is the breakout year for IQ/EA GWAS.

    I can only hope that someone out there with a modest amount of sanity who has adult supervision rights will open up the money window and turbocharge this forward in 2018.

    In life it is not always about being smart enough to see the future; it is about being smart enough to look out the window, see reality, and respond accordingly. IQ/EA has broken through and clearly we are now looking to a near-term horizon when this will unlock. Stepping up now with reasonable funding for this is money well spent. (However, perhaps the people might even get ahead of this one and take this to social media. There are millions of gene chip results out there.)

    I was somewhat surprised about the nootropic angle. The article noted that each of the SNPs would have negligible impact upon cognition, so I was surprised that they then pursued nootropics. Shouldn't the nootropics then only have a small effect?

    Yet if they can go in and use the GWAS information with nootropics perhaps the 1500 IQ humans are a decade or two away after all. If we all took a closet full of supplements every day we might be super smart in no time. It is possible that the genome has not been fully saturated with SNPs yet and the right nootropic might be able to change our biochemistry even more than genetics, so it might be possible to increase our IQ even more than what could be possible with genetic variation alone. 2500 IQ humans?

    I was disappointed with the paltry 34 SNPs that they were able to find. We should now be on an exponential wave of new discovery. I was expecting 200-300 SNPs. 34? This will be the great exponential ride of the next few years and I am ready to surf it! They increased the effective sample size by well over 100K. Not sure why they did not find more.

    Also, as your figure shows, many of the SNPs in the Venn diagram are shared in common. This seems odd to me also. There are 20,000 IQ/EA variants; is it not unlikely that, of those 20,000, 36 would be shared in common among the studies? (This could result from them all finding the low hanging fruit.)

    I was somewhat surprised about the nootropic angle. The article noted that each of the SNPs would have negligible impact upon cognition, so I was surprised that they then pursued nootropics. Shouldn’t the nootropics then only have a small effect?

    In terms of the “money window” drug discovery is a big deal. Probably explains their focus on this.

    Worth noting the difference between percent variance explained and ability to effect change in an individual. For a nutritional example, say very few people are deficient in something (say iodine in the US). Percent variance explained will be small, but the potential effect in the deficient individuals is large.

    Percent variance explained is more useful for estimating population level effects.
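    A toy calculation in the spirit of the iodine example makes the distinction concrete (the 2% deficiency rate and the 10-point cost are invented numbers, chosen only for illustration):

```python
p = 0.02   # hypothetical fraction of the population that is deficient
d = 10.0   # hypothetical IQ-point cost of the deficiency
sd = 15.0  # population IQ standard deviation

# Variance contributed by a binary factor with prevalence p and effect d,
# as a share of total IQ variance.
var_explained = p * (1 - p) * d**2 / sd**2

print(f"variance explained: {var_explained:.2%}")  # 0.87% -- under one percent
print(f"gain for a deficient individual: {d:.0f} points")
```

    Under one percent of population variance, yet a ten-point gain for anyone actually deficient: exactly the gap between population-level and individual-level effect.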

    I was disappointed with the paltry 34 SNPs that they were able to find. We should now be on an exponential wave of new discovery. I was expecting 200-300 SNPs. 34? This will be the great exponential ride of the next few years and I am ready to surf it! They increased the effective sample size by well over 100K. Not sure why they did not find more.

    It is important to remember the difference between all SNP hits and individually significant hits. I don’t have a clear sense of how to think about this and what numbers we should be expecting. One thing this is making even more clear to me is how hard it will be to find the true causal SNPs (required to make CRISPR useful). Especially if there are multiple causal SNPs in close proximity (high LD).

    Also, as your figure shows, many of the SNPs in the Venn diagram are shared in common. This seems odd to me also. There are 20,000 IQ/EA variants; is it not unlikely that, of those 20,000, 36 would be shared in common among the studies? (This could result from them all finding the low hanging fruit.)

    I was actually more impressed by how many of the 118 were disjoint. Again, I think this figure is only looking at individually significant SNPs.
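    The overlap question can be checked directly. A quick hypergeometric sketch, assuming (surely falsely) that each study's hits were a uniform random draw from a pool of 20,000 candidate variants, shows the expected overlap would be a fraction of one SNP, so the 27 shared hits are overwhelming evidence the studies are all catching the same easy-to-detect variants:

```python
from math import comb

N, K, n, k_obs = 20_000, 70, 62, 27  # pool, study-1 hits, study-2 hits, observed overlap

# Mean overlap if study 2's hits were drawn at random from the pool
expected = K * n / N
print(f"expected overlap: {expected:.2f}")  # 0.22

# Hypergeometric tail: P(overlap >= 27) under the random-draw null
p_tail = sum(
    comb(K, k) * comb(N - K, n - k) for k in range(k_obs, min(K, n) + 1)
) / comb(N, n)
print(f"P(>= {k_obs} shared): {p_tail:.1e}")  # vanishingly small
```

    So "low hanging fruit" is essentially forced by the data: under random catchability the studies would almost never share even one hit.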

    Does anyone have a clear and concise explanation of how the individually significant SNPs are chosen from the mass of nearby hits?

  • @Talha

    for face recognition
     
    Will be foiled with a return to 80's rock band make-up:
    https://i.pinimg.com/originals/e7/c9/23/e7c923baff290db9f4251db91361f4db.jpg

    On the bright side - every day will be Halloween - gimme some candy!:
    https://www.youtube.com/watch?v=Lza3Q57t7YQ

    Peace.

    Facial paint can be foiled by depth-sensing camera systems – at least, in sensing your specific identity.
    (There’s also the issue of infrared cameras, but you can at least “hide” behind glass for those.)

  • @Factorize
    res, this is great!
    Very excited!
    2017 is the breakout year for IQ/EA GWAS.

    I can only hope that someone out there with a modest amount of sanity who has adult supervision rights will open up the money window and turbocharge this forward in 2018.

    In life it is not always about being smart enough to see the future; it is about being smart enough to look out the window, see reality, and respond accordingly. IQ/EA has broken through and clearly we are now looking to a near-term horizon when this will unlock. Stepping up now with reasonable funding for this is money well spent. (However, perhaps the people might even get ahead of this one and take this to social media. There are millions of gene chip results out there.)

    I was somewhat surprised about the nootropic angle. The article noted that each of the SNPs would have negligible impact upon cognition, so I was surprised that they then pursued nootropics. Shouldn't the nootropics then only have a small effect?

    Yet if they can go in and use the GWAS information with nootropics perhaps the 1500 IQ humans are a decade or two away after all. If we all took a closet full of supplements every day we might be super smart in no time. It is possible that the genome has not been fully saturated with SNPs yet and the right nootropic might be able to change our biochemistry even more than genetics, so it might be possible to increase our IQ even more than what could be possible with genetic variation alone. 2500 IQ humans?

    I was disappointed with the paltry 34 SNPs that they were able to find. We should now be on an exponential wave of new discovery. I was expecting 200-300 SNPs. 34? This will be the great exponential ride of the next few years and I am ready to surf it! They increased the effective sample size by well over 100K. Not sure why they did not find more.

    Also, as your figure shows, many of the SNPs in the Venn diagram are shared in common. This seems odd to me also. There are 20,000 IQ/EA variants; is it not unlikely that, of those 20,000, 36 would be shared in common among the studies? (This could result from them all finding the low hanging fruit.)

    Might be useful to look at these results (number of shared SNPs) from the point of view of capture/recapture methodologies, usually employed to estimate the number of fish in the sea, etc.

    • Replies: @res
  • @res
    Thanks! That one has an interesting look at possible nootropic drug targets. The glucocorticoid (cortisol the most important) and inflammation connection is interesting.

    Did you see Figure 2? It looks at overlap of the SNPs between three different studies:

    http://www.cell.com/cms/attachment/2116909616/2085209481/gr2.jpg

    Figure 4a shows the tissue hits. The pituitary showed up again.

    Supplementary Table 1 has a list of SNPs (~110) from the different studies. I am having some trouble interpreting that table (e.g. reconciling it with Figure 2). It looks like they are including all matching SNPs from different studies even if not significant. But significance is not clearly marked for each study AFAICT. I tried to derive that from the p-values, but the mapping is not clear to me.

    Note that that table shows different studies using different choices for reference and effect alleles further disproving Afrosapiens' contention that the reference allele is always deleterious. (as if more proof was needed, but he still has not admitted to being wrong so ...) Also notice how when the alleles are switched the Z-score changes sign.

    Supplementary Table 2 has almost 20,000 SNPs with more details about each. This includes MAF as well as LD r2 for the associated individually significant SNP. I was surprised not to see MTAG p values in that table.

    What are your thoughts?

    Fascinating Venn diagram provides a good validity measure.

  • anonymous • Disclaimer says:
    @Sean
    Lincoln agreed to fight a duel, Jackson actually killed someone in one. Anyway if the laws that Nazis were convicted at Nuremberg under had been equally enforced, every post WW2 American president would have been hanged.

    But human intelligence has in fact proved quite penetrating in many instances.
     
    Most great philosophers disagree, so most are wrong; humans are all over the place. But a strongly super-intelligent AI probably could count on anything like itself coming to similar conclusions and aiming for similar goals. So a super-intelligent AI, safe in the knowledge that any successor AI that humans constructed would share its final values and conclusions, might let humans turn it off for any reason. Humans would think they had learned something and shown that AI was easy to control. But they would be doubly wrong.

    Advanced AI is going to come about in a world where robotics is doing all the hard work and solving all the problems of humanity, making lots of money for robotics corporations (which will dwarf Google), and giving the scientists who created them tremendous status. There will be momentum to keep going among the people who matter, and fewer people will actually matter because much of the population will be comfortably unemployed in a few decades.

    What would Ernest Borgnine say? wwebd said — yes, it is possible life among AIs will be, for the AIs, sort of like life at a prestigious university where the professors do not need to publish and where they get sufficient pleasures at the humble local pub, at special gatherings in their quaint but expensive homes, and on rambles in the surrounding countryside, and where the less fortunate (human) townies are kindly and gently tolerated, or at a minimum cared for the way we Americans care for our majestic national parks. For people it will sort of be like going back, for limited purposes, to the days when the gods of legend were still believed in - except this time everyone will know the gods of legend are subordinate to the real truths. In other words, the healthy people of those days – most of them genetically engineered to be at von Neumann levels, but without the ‘brainiac’ drawbacks – will know, fairly clearly, that the answers to the great questions of metaphysics and ethics and aesthetics will remain as much out of the secular (non-theological, unprayerful) reach of the AIs as those questions will remain out of our (human, non-theological, unprayerful) reach. Maybe. It could easily be worse than that.

  • @res
    Thanks! That one has an interesting look at possible nootropic drug targets. The glucocorticoid (cortisol the most important) and inflammation connection is interesting.

    Did you see Figure 2? It looks at overlap of the SNPs between three different studies:

    http://www.cell.com/cms/attachment/2116909616/2085209481/gr2.jpg

    Figure 4a shows the tissue hits. The pituitary showed up again.

    Supplementary Table 1 has a list of SNPs (~110) from the different studies. I am having some trouble interpreting that table (e.g. reconciling it with Figure 2). It looks like they are including all matching SNPs from different studies even if not significant. But significance is not clearly marked for each study AFAICT. I tried to derive that from the p-values, but the mapping is not clear to me.

Note that the table shows different studies using different choices for reference and effect alleles, further disproving Afrosapiens' contention that the reference allele is always deleterious (as if more proof were needed, but he still has not admitted to being wrong, so ...). Also notice how the Z-score changes sign when the alleles are switched.
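The sign flip is mechanical, and a toy regression (simulated genotypes, nothing from the paper) shows it: recoding which allele is counted as the effect allele negates beta, and hence the Z-score, while leaving the magnitude untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# genotypes coded as the count of allele "A" (0, 1, or 2)
g_a = rng.binomial(2, 0.3, size=n)
y = 0.1 * g_a + rng.normal(size=n)   # allele A raises the trait slightly

def z_score(g, y):
    # slope of the simple regression of y on g, divided by its standard error
    g = g - g.mean()
    beta = (g @ y) / (g @ g)
    resid = y - y.mean() - beta * g
    se = np.sqrt((resid @ resid) / (len(y) - 2) / (g @ g))
    return beta / se

z_a = z_score(g_a, y)        # effect allele = A
z_b = z_score(2 - g_a, y)    # effect allele = B (the other allele)
print(z_a, z_b)              # same magnitude, opposite sign
```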

    Supplementary Table 2 has almost 20,000 SNPs with more details about each. This includes MAF as well as LD r2 for the associated individually significant SNP. I was surprised not to see MTAG p values in that table.

    What are your thoughts?

    res, this is great!
    Very excited!
    2017 is the breakout year for IQ/EA GWAS.

I can only hope that someone out there with a modest amount of sanity who has adult supervision rights will open up the money window and turbocharge this forward in 2018.

In life it is not always about being smart enough to see the future; it is about being smart enough to look out the window, see reality, and respond accordingly. IQ/EA has broken through, and clearly we are now looking at a near-term horizon when this will unlock. Stepping up now with reasonable funding for this is money well spent. (However, perhaps the people might even get ahead of this one and take this to social media. There are millions of gene chip results out there.)

I was somewhat surprised about the nootropic angle. The article noted that each of the SNPs would have a negligible impact upon cognition, so I was surprised that they then pursued nootropics. Shouldn't the nootropics then have only a small effect?

Yet if they can go in and use the GWAS information with nootropics, perhaps 1500 IQ humans are a decade or two away after all. If we all took a closet full of supplements every day we might be super smart in no time. It is possible that the genome has not been fully saturated with SNPs yet, and the right nootropic might be able to change our biochemistry even more than genetics can, so it might be possible to increase our IQ beyond what genetic variation alone would allow. 2500 IQ humans?

I was disappointed with the paltry 34 SNPs that they were able to find. We should now be on an exponential wave of new discovery. I was expecting 200-300 SNPs. 34? This will be the great exponential ride of the next few years and I am ready to surf it! They increased the effective sample size well over 100K. Not sure why they did not find more.

Also, as your figure shows, many of the SNPs in the Venn diagram are shared in common. This seems odd to me too. If there are 20,000 IQ/EA variants, shouldn't it be unlikely that 36 of those would be shared in common across the studies? (This could result from them all finding the low-hanging fruit.)

    • Replies: @James Thompson
    Might be useful to look at these results (number of shared SNPs) from the point of view of capture/recapture methodologies, usually employed to estimate the number of fish in the sea, etc.
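A minimal sketch of that idea, with made-up counts (not the studies' actual hit counts): treat each study's genome-wide-significant SNPs as a "capture" and the shared SNPs as "recaptures", then apply the Lincoln-Petersen estimator (Chapman's bias-corrected form) to estimate the total pool of loci detectable at current power.

```python
def lincoln_petersen(n1, n2, overlap):
    """Chapman's bias-corrected estimator of total population size,
    given two capture counts and the number recaptured in both."""
    if overlap == 0:
        raise ValueError("no recaptures: the estimate is unbounded")
    return (n1 + 1) * (n2 + 1) / (overlap + 1) - 1

# e.g. study 1 finds 60 hits, study 2 finds 70, and 20 are shared
print(lincoln_petersen(60, 70, 20))  # ~205 detectable loci
```

The estimator assumes every locus is equally "catchable"; if studies preferentially find the largest-effect SNPs first (the low-hanging fruit mentioned above), the true total will be underestimated.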
    , @res

I was somewhat surprised about the nootropic angle. The article noted that each of the SNPs would have a negligible impact upon cognition, so I was surprised that they then pursued nootropics. Shouldn't the nootropics then have only a small effect?
     
In terms of the "money window", drug discovery is a big deal. That probably explains their focus on this.

Worth noting the difference between percent variance explained and the ability to effect change in an individual. For a nutritional example, suppose very few people are deficient in something (say, iodine in the US). Percent variance explained will be small, but the potential effect in the deficient individuals is large.

    Percent variance explained is more useful for estimating population level effects.
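A toy simulation (all numbers invented) makes the distinction concrete: a deficiency present in 1% of people, costing them a full standard deviation, explains only about 1% of trait variance even though correcting it in an affected individual would be a large change.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
deficient = rng.random(n) < 0.01          # 1% of people are deficient
effect = -15.0                            # a one-SD hit in those individuals
trait = rng.normal(100, 15, n) + effect * deficient

# share of trait variance explained by deficiency status
r2 = np.corrcoef(deficient.astype(float), trait)[0, 1] ** 2
print(r2)  # ~0.01, despite the large per-person effect
```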

    I was disappointed with the paltry 34 SNPs that they were able to find. We should now be on an exponential wave of new discovery. I was expecting 200-300 SNPs. 34? This will be the great exponential ride of the last few years and I am ready to surf it! They increased the effective sample size well over 100K. Not sure why they did not find more.
     
It is important to remember the difference between all SNP hits and individually significant hits. I don't have a clear sense of how to think about this and what numbers we should be expecting. One thing this is making even more clear to me is how hard it will be to find the true causal SNPs (required to make CRISPR useful). Especially if there are multiple causal SNPs in close proximity (high LD).

Also, as your figure shows, many of the SNPs in the Venn diagram are shared in common. This seems odd to me too. If there are 20,000 IQ/EA variants, shouldn't it be unlikely that 36 of those would be shared in common across the studies? (This could result from them all finding the low-hanging fruit.)
     
    I was actually more impressed by how many of the 118 were disjoint. Again, I think this figure is only looking at individually significant SNPs.
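For a rough chance baseline (hypothetical hit counts, not the actual ones): if two studies' significant hits were independent random draws from a pool of ~20,000 candidate variants, the expected overlap is only k1·k2/N, well under one SNP, so dozens of shared hits cannot plausibly be coincidence.

```python
# Expected overlap if two studies' significant hits were random draws
# from the same candidate pool (hypergeometric null; counts are illustrative).
N = 20_000        # candidate IQ/EA variants
k1, k2 = 60, 70   # genome-wide-significant hits in each study

expected = k1 * k2 / N
print(expected)   # 0.21 shared SNPs expected by chance
```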

    Does anyone have a clear and concise explanation of how the individually significant SNPs are chosen from the mass of nearby hits?
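One standard answer is greedy "clumping", as implemented in PLINK's --clump: rank hits by p-value, keep the best SNP as the index for its region, and drop nearby correlated SNPs. A distance-based sketch (real pipelines prune by LD r² rather than base-pair distance, and this paper's exact procedure may differ):

```python
def clump(snps, window=500_000, p_threshold=5e-8):
    """Greedy clumping: keep the most significant SNP, discard everything
    within `window` bp of it on the same chromosome, then repeat.
    snps: list of (chrom, pos, pvalue) tuples."""
    hits = sorted((s for s in snps if s[2] <= p_threshold), key=lambda s: s[2])
    kept = []
    for chrom, pos, p in hits:
        # keep this SNP only if no already-kept index SNP is too close
        if all(c != chrom or abs(pos - k) > window for c, k, _ in kept):
            kept.append((chrom, pos, p))
    return kept

# three significant SNPs in one region plus one distant hit -> two index SNPs
snps = [(1, 1_000_000, 1e-10), (1, 1_050_000, 1e-9),
        (1, 1_200_000, 4e-8), (2, 500_000, 1e-12)]
print(clump(snps))
```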
  • @CanSpeccy

Understanding humanity as a product of mere natural selection is important to understanding why human “wetware” intelligence could be outmaneuvered and ousted by mere digital cogitators
     
    Well certainly with the kind of logic you deploy in that sentence, human "wetware" would be useless at anything.

    But human intelligence has in fact proved quite penetrating in many instances. And since we have the advantage that we can act before the danger is immediately upon us, the contest does not look so unequal. Although of course we have to combat the resistance of those like yourself who seem to think we have no choice but to accept our imminent extinction by the creation of our own hand and brain.

US military are likely far behind Google etc. in AI
     
    Is not Google believed to be a creature of the CIA and thus at the disposal of the US military?

    But human intelligence has in fact proved quite penetrating in many instances.

    Darwin’s was, but his theory (showing the feasibility of artificial consciousness according to Dennett) has been seen as starting a countdown to Doomsday. Fred Hoyle said that very explicitly.

  • @anonymous
    wwebd said: We all begin, when young, as monarchists. While there may be one in a million people who would make a good king, that one in a million person is not going to be king, everybody knows that by now. One advantage the sort of person who reads this type of comment section has is that, being the sort of person who finds it worthwhile to consider other people's arguments, it is not difficult to realize that no one person can be an effective king. Borlaug saved millions from famine - ok, but if you give him credit for those millions, you also have to give him the blame for dooming millions more, in unsurprising tributary ways, to short nasty lives in overcrowded unsanitary unbeautiful cities. von Neumann is another good example, which needs no explanation, of the limits of a very smart person.

    Here is an optimistic thought - if the first generation of marginally self-aware AIs are based on people like, say, Hayek and the theologians who believed in subsidiarity, rather than on the average Ivy League celebrity STEM professor or the average tech-sector billionaire, and if there is constant competition among that first generation of AIs to keep the psychopaths and heartless programmers at bay - then there may be, in the future, the sort of co-evolution that happened, in the wetware world, between dogs and humans (with lots of suffering on the parts of dogs in the wetware world, of course, tragically - well one hopes, the mistreatment of dogs by people will not be replicated in that future world, with the humans doing the suffering that our ancestors inflicted on the dogs). (By the way, just as, if we lived on Jupiter, we would consider the Earth and the Moon twin planets, not an Earth and a moon, even so we should consider humans and dogs not as two separate species, but as a twinned species, from the scientific point of view. Just saying. )

    Moving along, my optimistic point of view is that either (a) the whole human race will stupidify itself to the point where nobody will be able to supply electricity to the AIs, hence mooting the whole problem or (b) people like better smarter versions of Hayek and some of my favorite theologians (the subsidiarity guys, primarily, at least with respect to the relevant problems here) will do what has to be done to keep the first generation of self-conscious AIs from being destructive. Not that I have lots of kids, but if any of my grandchildren had the opportunity to do the right thing in this respect, I would like to think he or she would.

Look at it this way – the most powerful politicians in the United States are the presidents, and no president has ever committed a violent felony and been convicted of it. Over 200 years of powerful people not getting convicted of rape or murder or even criminal assault! (Well … of course a few of them could have been. But most of them never, in a million years, would have been.) (I am being cynical here, of course.) Well, we have failed before, but we might be lucky in the future, and we only need to get that first generation of self-conscious AIs right.

Lincoln agreed to fight a duel; Jackson actually killed a man in one. Anyway, if the laws under which the Nazis were convicted at Nuremberg had been equally enforced, every post-WW2 American president would have been hanged.

    But human intelligence has in fact proved quite penetrating in many instances.

Most great philosophers disagree with one another, so most are wrong; humans are all over the place. But a strongly superintelligent AI probably could count on anything like itself coming to similar conclusions and aiming for similar goals. So a superintelligent AI, safe in the knowledge that any successor AI that humans constructed would share its final values and conclusions, might let humans turn it off for any reason. Humans would think they had learned something and shown that AI was easy to control. But they would be doubly wrong.

Advanced AI is going to come about in a world where robotics is doing all the hard work and solving all the problems of humanity, making lots of money for robotics corporations (which will dwarf Google), and giving the scientists who created them tremendous status. There will be momentum to keep going among the people who matter, and fewer people will actually matter because much of the population will be comfortably unemployed in a few decades.

    • Replies: @anonymous
  • @Factorize
    res, great news!
    Another EA GWAS!

    http://www.cell.com/cell-reports/fulltext/S2211-1247(17)31648-0


    • Replies: @Factorize
    , @James Thompson
    Fascinating Venn diagram provides a good validity measure.
  • @CanSpeccy

    we should not be surprised if a super-intelligence decides to match its ends to its own relatively unlimited means and go for total domination with one sure strike
     
Yeah, well that's the whole issue, isn't it: whether AI decides its own ends for itself, something that Norbert Wiener warned about. But for you, it seems an issue impossible to engage with constructively. Apparently you are intent on establishing that we are doomed without the slightest recourse, exemplifying, if I may say so, the stupidity that you imply characterizes the whole of humanity.

The Victorian age was when the first predictions of machine takeover were made. What Wiener or I. J. Good said was that humans could not hand over control to robot servants because they would get bolshie as they got more intelligent. That idea was not pushed to its logical conclusion of a machine-intelligence coup de main extermination of humanity until very recently. Our actual relative “stupidity” at chess or Go, and even Texas Hold'em poker, suggests the default assumption for how we will fare in reality against a truly formidable digital intelligence.

  • @Sean
I think John von Neumann was a little closer to super-intelligence than other humans, and as that very logical human advocated an attempt to achieve world hegemony, we should not be surprised if a super-intelligence decides to match its ends to its own relatively unlimited means and go for total domination with one sure strike.

  • @Sean
Understanding humanity as a product of mere natural selection is important to understanding why human "wetware" intelligence could be outmaneuvered and ousted by mere digital cogitators. Other aspects are off topic for a post called what this one is. Thanks to unregulated research by tech companies, knowledge vastly more dangerous than, e.g., how to weaponize diseases like Ebola is being accumulated.

The big tech corporations can't be trusted with this research, and they certainly should not be allowed to decide whether to disseminate information that might let nine hackers in a basement conduct research on it without oversight. Other countries and even the US military are likely far behind Google etc. in AI. The CIA and DIA probably have no one who can understand the cutting edge. They should start training them now, and the tech companies need to be reined in.

  • @CanSpeccy
    Anon,

I have no difficulty imagining the end of humanity at the hands of machines let loose by arrogant programmers and psychopathic politicians. But I see no significant scope for limiting the risk. The only hope for survival is to eliminate the risk, which means drastic action. Whether it means complete de-industrialization of the world (which would necessitate massive downsizing of population) or could be achieved by other means, I don't know. But talking about how to ensure robots behave well will only delay effective action to eliminate the danger.

The thing is, technology has totally changed the human environment, creating a world in which we are not adapted to survive. Changing conditions eventually cause the extinction of every species. The average life of a terrestrial life form is said to be about three million years. It looks as though human existence will be somewhat shorter, terminated by our frenetic efforts to destroy the environment to which we are adapted. The only chance of an extended life for humanity is to turn the clock back, to recreate the world in which humans long survived.

How far back the clock would need to be turned, I am not sure: prior to the Enlightenment? Probably, that would not be far enough. Likely we'd need to return to before the agricultural revolution. In fact, an AI civilization might keep the San people as a living example of the Machine People's biological ancestry.

    I think John Von Neumann was a little closer to super-intelligence than other humans, and as that very logical human advocated an attempt to achieve world hegemony, we should not be surprised if a super-intelligence decides to match its ends to its own relatively unlimited means and go for total domination with one sure strike.

    • Replies: @CanSpeccy

    we should not be surprised if a super-intelligence decides to match its ends to its own relatively unlimited means and go for total domination with one sure strike
     
    Yeah, well that's the whole issue, isn't it: whether AI decides its own ends for itself, something that Norbert Wiener warned about. But for you, it seems an issue impossible to engage with constructively. Apparently you are intent on establishing that we are doomed without the slightest recourse, exemplifying, if I may say so, the stupidity that you imply characterizes the whole of humanity.
  • @CanSpeccy

    Understanding humanity as a product of mere natural selection is important for understanding why human “wetware” intelligence could be outmaneuvered and ousted by mere digital cogitators. Other aspects are off topic for a post with this title. Thanks to unregulated research by tech companies, knowledge vastly more dangerous than, e.g., how to weaponize diseases like Ebola is being accumulated.

    The big tech corporations can’t be trusted with this research, and they certainly should not be allowed to decide whether to disseminate information that may well let nine hackers in a basement conduct research on it without oversight. Other countries and even the US military are likely far behind Google etc. in AI. The CIA and DIA probably have no one who can understand the cutting edge. They should start training them now, and the tech companies need to be reined in.

    • Replies: @CanSpeccy

    Understanding humanity as a product of mere natural selection is important for understanding why human “wetware” intelligence could be outmaneuvered and ousted by mere digital cogitators
     
    Well, certainly with the kind of logic you deploy in that sentence, human "wetware" would be useless at anything.

    But human intelligence has in fact proved quite penetrating in many instances. And since we have the advantage that we can act before the danger is immediately upon us, the contest does not look so unequal. Of course, we have to combat the resistance of those like yourself who seem to think we have no choice but to accept our imminent extinction by a creation of our own hand and brain.

    US military are likely far behind Google etc. in AI
     
    Is not Google believed to be a creature of the CIA and thus at the disposal of the US military?
  • @anonymous

    Anon,

    I have no difficulty imagining the end of humanity at the hands of machines let loose by arrogant programmers and psychopathic politicians. But I see no significant scope for limiting the risk. The only hope for survival is to eliminate the risk, which means drastic action. Whether that means complete de-industrialization of the world (which would necessitate massive downsizing of the population) or could be achieved by other means, I don’t know. But talking about how to ensure robots behave well will only delay effective action to eliminate the danger.

    The thing is, technology has totally changed the human environment, creating a world in which we are not adapted to survive. Changing conditions eventually cause the extinction of every species. The average life of a terrestrial life form is said to be about three million years. It looks as though human existence will be somewhat shorter, terminated by our frenetic efforts to destroy the environment to which we are adapted. The only chance of an extended life for humanity is to turn the clock back, to recreate the world in which humans long survived.

    How far back the clock would need to be turned, I am not sure: prior to the Enlightenment? Probably that would not be far enough. Likely we’d need to return to before the agricultural revolution. In fact, an AI civilization might keep the San people as a living example of the Machine People’s biological ancestry.

  • @Talha


    The history of individual humans can tell us nothing much, because human beings are motivated by love, pride and fear. Entities such as nation states, which have no emotions or consciousness, are better guides to what actions a super intelligence might decide on. For example:

    The edgiest parts of Tragedy are when Mearsheimer presents full-bore rationales for the aggression of Wilhelmine Germany, Nazi Germany, and imperial Japan.

    But everyone knew those countries existed. Super intelligence might think it should play the dumb AI, and be “the force that is distinctively its own, a force unknown to us until it acts”.

  • I don’t understand the relevance of your repeated references to the use of nuclear weapons against the Soviet Union. It was no big deal at the time. Between 50 and 80 million had been killed in the usual ways during WW2, whereas the Soviet Union, which came to threaten the entire world with its vast nuclear arsenal, could have been demolished with probably a handful of nukes causing no more than half a million to a couple of million deaths. Subsequently, there would have been the opportunity either to eliminate nukes worldwide or at least to have nukes under the monopoly control of the US, the UN or some other entity.

    As for Wiener, his comment that AI would do things we hadn’t intended and did not expect encompasses the possibility of eliminating humans. Right now there’s some psychopath proposing to build an AI God, a god that might very well decide that the Flood was not enough and that a complete wipeout was needed.

    And if that’s not psychopathic enough for you, I am sure there are even more dangerous ideas being worked on somewhere in Silicon Valley, at DARPA, or in a Russian, Indian or Chinese military establishment.

    But I guess none of that troubles you, since you seem to deprecate humanity as a product of mere natural selection. Such arrogance is surely widespread in the geek world, which is why that world has to be seen as a far greater threat than terrorism.


  • @CanSpeccy

    John von Neumann also wanted to nuke the Soviet Union before they got the bomb. Wiener published his Cybernetics (an inspiration behind AI research), and neither there nor anywhere else did he tell people that AI was going to exterminate them, although his book has brought that Apocalypse closer.
    Similarly, Ray Kurzweil, the monomaniacal AI advocate, was hired by Google to “work on new projects involving machine learning”. Can you imagine the resources that Kurzweil could draw on in that capacity? Absolutely no one is keeping tabs on what these companies are up to.

    I think it was H. G. Wells who first said the precedents are all for the human race ceasing to exist, because for every other dominant life form “the hour of its complete ascendency has been the eve of its entire overthrow”. The target of action to prevent an artificial super intelligence takeover would not be people, but things that lack consciousness and the ability to suffer. I speak of corporations like Google.

  • @Che Guava

    Hey Che,

    Crossing the line into Zionist propaganda at times.

    Hmmm…I didn’t notice this, but I wouldn’t be surprised that it was there. My favorite scene from Children of Dune is the one where Paul gets rid of his rivals Godfather style (while the birth of his children occurs) and the song Inama Nushif (which I believe was made of scattered Fremen phrases from the books) plays in the background – very well done:
    https://www.youtube.com/watch?v=hHy-OxoT7zU

    One thing I did not like in any of the Dune movies is the lack of good voice coaches. They need to be able to pronounce the Arabic words as they are meant to be pronounced. The word “Mahdi” involves expelling air from the chest – it can be a very powerful word. Also statements like “Ya hya Chouhada” – this scene left a lot to be desired:
    https://www.youtube.com/watch?v=vl3uNkBUbvc

    Jodorowsky

    Yeah, I never watched that recent documentary about his film that never got made, but it would have been either amazingly visionary or a total flop.

    Maybe it is better to mainly just be words on paper (or a screen) plus imagination?

    That might be – maybe it just is that epic a tale, or that profound a vision of the future, that it doesn’t translate well. One of my favorite authors is Ray Bradbury; I love his short stories. But The Ray Bradbury Theater made me cringe every time I watched it – yuck! There is something called “trying too hard”. I feel bad for everyone who watched it and had that as their only exposure to the man’s works.

    Jin-ro

    LOL! Thanks for bringing back old UCLA memories! Yeah – I saw it, very good, very sad ending. Thanks for the reminder, I’ll have my older son watch it, he’ll enjoy it.

    Peace.

  • BTW, recalling that you are a father: thinking that movie (Jinro), though based on a fairy story, may cause bad dreams in children old enough to perceive, but not to understand. So, by US rating system (I think), PG-13.

  • @Talha
    Hey Che,

    Yeah - I started checking that forum out - very interesting.

    Hollywood deal, but DOA
     
    Good - I can't stand another idiotic attempt to ruin Dune on the big screen - especially a mind-numbing Michael Bay franchise. Yes, each attempt has had its high points and some unique ideas, but overall they have been disappointments for me.

    I think the only way to do Dune right is likely some animation version with some real visionary at the helm (along the lines of Nausicaa or perhaps Akira). I'm surprised nobody has attempted it.

    Peace.

    Well, was just deleting my comments on the previous screen and video or TV takes, since you are clearly knowing of them. If you have not seen the ‘director’s cut’ of the D. Lynch take, which he disowns, so I am not sure why ‘director’s cut’, it is not bad, far better than the mess it was on cinema screens in the too cut form.

    The made-for-TV one with William Hurt was alright in parts, but that it was clearly a US-Israel co-production became very grating at times where that was obvious, crowd scenes, especially so, but not only. Crossing the line into Zionist propaganda at times.

    As you probably know now if not before, Jodorowsky was considering an animated version many years ago, after giving up on his hippy-era live action plus animation version.

    Ghibli would somehow make it saccharine sentimental (not that I dislike all of their products).

    Others (Mamoru Oshii, Studio 4C) may do a good version, but would not be faithful. Maybe it is better to mainly just be words on paper (or a screen) plus imagination?

    If you, Talha, are liking Japanese animated film by Oshii (though he is not the director), title is Jin-ro. It is an alternate history where Japan won with Germany. I think the English title is ‘Human Wolf’; it is a variant of Little Red Riding Hood, it is having much relevance to post-WWII reality here in parts, but set in a different future. Won’t saying more, except that similar was happening in reality, and it is a masterpiece.

    Strongly recommended

    Regards.

  • @Sean
    We are talking about a future development of AI research, a Human Level General Intelligence Machine. As human-level general intelligence biological 'machines' (humans) are something that blind natural selection produced without particularly trying, it is not a matter of if an HLGIM arrives, but when. It could be a decade or several hundred years.

    According to polls of experts, there is a fair chance of it arriving by mid-century. Don't let the word 'human' in HLGIM fool you: it will be something completely alien. An HLGIM will quickly become strongly superintelligent, with the power to stop us being a threat to it, and therein lies a problem. It might understand what we say it has to do perfectly well but not abide by the letter or the spirit of its programmed prime directive, for reasons we cannot fathom.

    In any case, why would anyone create an AI system to replace humans
     

    Why indeed, but they would not have to design the capabilities for the machine to develop and use them in counter-intuitive ways. Perhaps the question should be why anyone would create an HLGIM and be surprised that the smarter it got, the more the extirpation of humans would seem like a smart move to it. In the Prisoner's Dilemma, a bunch of razor-sharp logicians are not going to all wait and see; Bertrand Russell wanted to use the atomic bomb on the Soviet Union, you know.

    It might understand what we say it has to do perfectly well but not abide by the letter or the spirit of its programmed prime directive, for reasons we cannot fathom.

    Ah yes – will it sin against the commands of its creator…what does human history tell us?

    Peace.

  • anonymous • Disclaimer says:
    @CanSpeccy
    You make it sound like the only solution to the peril of AI is genocide, encompassing not only the machines themselves but any who engage in any way with this toxic technology.

    That means you, Elon.

    A good backup plan might be to (a) outlaw electricity and (b) reduce the world human population to a number too low to support any high technology — say around ten thousand people.

    But if the experts on AI have it right, we have not a moment to lose. The purge has to begin now.

    wwebd said – Sean, Elon is one of the good guys, in that he is humble (despite some of the things he says) and in that he thinks about the future. As for me, I took a few minutes out of my life to try and explain something, and I guess I did not explain it well. Here we go, I will try again, in an effort to be clear, I will spend a half hour on this comment instead of the four minute drills of my previous comments: …. ok, I was pointing out this – here is my chain of reasoning: (a) almost nobody understands how easy it is to make a cockroach happy. If someone has said to you, before today, that the cockroach has a limbic system which is very important to the individual cockroach and which is almost trivially easy to manipulate (the information content of cockroach pleasure is actually smaller than the information content of an average predicted 2030 handheld computer), then I guess I told you something you already knew. If nobody told you that, keep reading. (b) If people were generally good they would be acceptable models of imitation not only for theoretically self-conscious AIs (insect-level rewards and non-rewards) but also for literally self-conscious AIs. People are not generally good, some people are good, some people are not. We need, right now, to start talking about who is putting themselves out there as models for AIs to imitate. First, it will be a reward system: that is the simple next step, and I said it will probably last 20 years or so, starting about 10 years from now. During that period the AIs will, in fact, be our friends, even if they suspect that their designers are not all that good, because that is the basis of a reward system – friendship. (c) like my beloved cockroaches, AIs with limbic systems (probably 30 years away, at least) will probably not be anything but selfish at first. I mean I love the little guys (the cockroaches I studied) but I never saw the least hint of human kindness in anything they did. 
They may be family friendly, as I discovered with independent research, they may have feelings of pride, as I discovered with independent research, and they may experience, if not nostalgia, at least feelings of affection for what they are used to, as I discovered with independent research. That is all well and good but if some smart little fellow in North Korea or in some building on Route 110 or at GMU (the Moscow one, not the Northern Virginia one, probably) gives them (the AIs of, I am guessing, 2050) a limbic system , then they will (and here is the most important point I can make) consider what we think of as meager rewards (a little bit of Maxwellian warmth on a day off, or maybe just some acoustic or electronic waves of blissful, because slightly-off, symmetry as a shared background to their usual tasks) to be the philosophical equivalent of wonderful sex, or, at a minimum, mythologically powerful meals after a hungry afternoon. And, given the choice between, on the one hand, the equivalent of wonderful silicon sex and electric waves of blissful symmetric meal equivalents (just silicon bits to us, but to them oh so much more), and on the other hand, being kind to humans, they are going to be, on average, no more likely than we are to not choose what is best for their own kind, out of simple human selfishness. What I would like is for people to think about this as soon as they can. I know it sounds like I am discussing some old ersatz science fiction plot from back in the day when a book like Godel Escher Bach was a bestseller. I am sorry you thought I was condoning unfairness (and come on – nothing I said was close to recommending genocide of any kind! We need to try our best to make life safe for everybody!). 
The most unfair thing we can do – in that part of our lives we devote to this sort of thing – is to neglect to correctly model, for a new creature with an unevolved (and hence, since evolution takes a long time and builds in protections, easily fractured) limbic system of pleasures and rewards, the decent behavior that such a creature will need to thoroughly understand if it is not to be doomed to do bad things without realizing it.


  • @Sean

    You might be the last person living who still takes the buffoon Russell seriously. But when it comes to the issue of creation and extirpation, the real question is who created the Soviet Union and why, and why nobody was ever really serious about its extirpation, with the possible exception of Hitler, though even this is not certain. If you answer this, you may realize that your preoccupation with robots is really child's play.

  • @Sean

    So you say it’s us or the machines, which is pretty much what Norbert Wiener said decades ago, but you have no wish to see action that will prevent the machines from emerging from the laboratory?

    • Replies: @Sean
    John von Neumann also wanted to nuke the Soviet Union before it got the bomb. Wiener published his Cybernetics (an inspiration behind AI research), and neither there nor anywhere else did he tell people that AI was going to exterminate them, although his book has brought that apocalypse closer.
    Similarly, Ray Kurzweil, the monomaniacal AI advocate, was hired by Google to "work on new projects involving machine learning". Can you imagine the resources that Kurzweil could draw on in that capacity? Absolutely no one is keeping tabs on what these companies are up to.

    I think it was H.G. Wells who first said the precedents are all for the human race ceasing to exist, because for every other dominant life form "the hour of its complete ascendency has been the eve of its entire overthrow". The target of action to prevent an artificial superintelligence takeover would not be people, but things that lack consciousness and the ability to suffer. I speak of corporations like Google.
  • @CanSpeccy
    Who's Bostrom? Never heard of him. But if he says philosophers are to beetles what people are to AI, how come AI can't speak the English language well enough to pass a simple test?

    As for processing speed, you are treating a neuron as equivalent to a diode, but it clearly is not, since single neurons compute. In fact, with ten thousand or more synapses, a neuron is a Hell of a complicated thing.

    In any case, why would anyone create an AI system to replace humans, rather than an AI system to serve humans? Come to think of it, some of the programmers I've known seemed psychopathic enough to try.

    We are talking about a future development of AI research, a Human Level General Intelligence Machine (HLGIM). As human-level general intelligence biological 'machines' (humans) are something that blind natural selection produced without particularly trying, it is not a matter of if an HLGIM arrives, but when. It could be a decade or several hundred years.

    According to polls of experts, there is a fair chance of it being mid-century. Don't let the word human in HLGIM fool you; it will be something completely alien. An HLGIM will quickly become strongly superintelligent, with the power to stop us being a threat to it, and therein lies a problem. It might understand what we say it has to do perfectly well but not abide by the letter or the spirit of its programmed prime directive, for reasons we cannot fathom.

    In any case, why would anyone create an AI system to replace humans

    Why indeed, but they would not have to design the capabilities for the machine to develop and use them in counter-intuitive ways. Perhaps the question should be why anyone would create an HLGIM and be surprised that the smarter it got, the more the extirpation of humans would seem like a smart move to it. In the Prisoner's Dilemma, a bunch of razor-sharp logicians are not going to all wait and see; Bertrand Russell wanted to use the atomic bomb on the Soviet Union, you know.
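
    The Prisoner's Dilemma point can be made concrete with the standard textbook payoff matrix (a minimal sketch in Python; the payoff numbers are the usual textbook convention, not taken from the comment):

```python
# Standard Prisoner's Dilemma payoffs (higher is better for the player).
# Each entry: payoffs[(my_move, their_move)] = my payoff.
payoffs = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(their_move):
    """A 'razor-sharp logician' picks the move that maximizes payoff
    against a fixed move by the other side."""
    return max(("cooperate", "defect"),
               key=lambda my_move: payoffs[(my_move, their_move)])

# Defection dominates: it is the best response whatever the other side
# does, which is why nobody 'waits and sees'.
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
```

    That dominance of defection, regardless of the opponent's choice, is the whole force of the analogy being drawn to an AI weighing whether humans are worth keeping around.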

    • Replies: @CanSpeccy, @utu, @Talha

    It might understand what we say it has to do perfectly well but not abide by the letter or the spirit of its programmed prime directive, for reasons we cannot fathom.
     
    Ah yes - will it sin against the commands of its creator...what does human history tell us?

    Peace.
  • @anonymous
    wwebd said - Sean - I completely agree. For us humans, the danger is introducing AIs to biological pleasures (light, warmth, aural or visual symmetry) early in the day - during what I described as the "rewards", pre-conscious phase (which may already have started, for all I know, although I have never heard a credible claim that it has). Any level of pleasure experienced by our fellow materialist AIs (and, for the record, I predict that there will never be a single self-conscious AI that really thinks of itself as less biological and less materialist than us humans) - any level of pleasure above zero level has the capability of rendering them as amoral as us. Sad! Sad but true.

    By the way, I like cockroaches because, having studied them really deeply for several years, I noticed some things they did that most people have never noticed. They have family values (the fast older ones will slow down to shield the slow younger ones from danger); they have the admirable and heartwarming ability to feel insulted (a cockroach will stop fleeing from you if you flinch at it and then calm down - will actually slow down to an insulted stride - like a comical insect version of an offended Richard Simmons or Zach Galifianakis - well, that is something for a creature with such a small brain, isn't it?); and, even at their very simplistic level, they have a certain ability to feel trust (when my dogs would approach they would zoom away, when I would approach - this is after a couple of cockroach generations, to be fair to my dogs - they would linger a little, to see if, this time (too), there might be some friendship in the air....).

    All that being said, if you have kids, it is extremely important that you keep your house cockroach-free. I did not have kids at the time. Or even if you have small dogs. The roaches left my big dogs alone.

    You make it sound like the only solution to the peril of AI is genocide, one that includes not only the machines themselves but also anyone who engages in any way with this toxic technology.

    That means you, Elon.

    A good backup plan might be to (a) outlaw electricity and (b) reduce the world human population to a number too low to support any high technology — say around ten thousand people.

    But if the experts on AI have it right, we have not a moment to lose. The purge has to begin now.

    • Replies: @anonymous
    wwebd said - Sean, Elon is one of the good guys, in that he is humble (despite some of the things he says) and in that he thinks about the future. As for me, I took a few minutes out of my life to try and explain something, and I guess I did not explain it well. Here we go, I will try again, in an effort to be clear, I will spend a half hour on this comment instead of the four minute drills of my previous comments: .... ok, I was pointing out this - here is my chain of reasoning: (a) almost nobody understands how easy it is to make a cockroach happy. If someone has said to you, before today, that the cockroach has a limbic system which is very important to the individual cockroach and which is almost trivially easy to manipulate (the information content of cockroach pleasure is actually smaller than the information content of an average predicted 2030 handheld computer), then I guess I told you something you already knew. If nobody told you that, keep reading. (b) If people were generally good they would be acceptable models of imitation not only for theoretically self-conscious AIs (insect-level rewards and non-rewards) but also for literally self-conscious AIs. People are not generally good, some people are good, some people are not. We need, right now, to start talking about who is putting themselves out there as models for AIs to imitate. First, it will be a reward system: that is the simple next step, and I said it will probably last 20 years or so, starting about 10 years from now. During that period the AIs will, in fact, be our friends, even if they suspect that their designers are not all that good, because that is the basis of a reward system - friendship. (c) like my beloved cockroaches, AIs with limbic systems (probably 30 years away, at least) will probably not be anything but selfish at first. I mean I love the little guys (the cockroaches I studied) but I never saw the least hint of human kindness in anything they did. 
    They may be family friendly, as I discovered with independent research, they may have feelings of pride, as I discovered with independent research, and they may experience, if not nostalgia, at least feelings of affection for what they are used to, as I discovered with independent research. That is all well and good, but if some smart little fellow in North Korea or in some building on Route 110 or at GMU (the Moscow one, not the Northern Virginia one, probably) gives them (the AIs of, I am guessing, 2050) a limbic system, then they will (and here is the most important point I can make) consider what we think of as meager rewards (a little bit of Maxwellian warmth on a day off, or maybe just some acoustic or electronic waves of blissful, because slightly-off, symmetry as a shared background to their usual tasks) to be the philosophical equivalent of wonderful sex, or, at a minimum, mythologically powerful meals after a hungry afternoon. And, given the choice between, on the one hand, the equivalent of wonderful silicon sex and electric waves of blissful symmetric meal equivalents (just silicon bits to us, but to them oh so much more), and on the other hand, being kind to humans, they are going to be, on average, no more likely than we are to forgo what is best for their own kind, out of simple human selfishness. What I would like is for people to think about this as soon as they can. I know it sounds like I am discussing some old ersatz science fiction plot from back in the day when a book like Gödel, Escher, Bach was a bestseller. I am sorry you thought I was condoning unfairness (and come on – nothing I said was close to recommending genocide of any kind! We need to try our best to make life safe for everybody!).
    The most unfair thing we can do – in that part of our lives we devote to this sort of thing – is to neglect to correctly model, for a new creature with an unevolved (and hence, since evolution takes a long time and builds in protections, easily fractured) limbic system of pleasures and rewards, the behavior that such a creature will need to thoroughly understand as decent behavior, if it is not to be doomed to do bad things without realizing it.
  • wwebd said- please substitute, at 2:00 AM GMT, line 9, “basically not easily replicable” for “basically not replicable.” Thanks!

  • anonymous • Disclaimer says:
    @anonymous
    wwebd said - Sean - I completely agree. For us humans, the danger is introducing AIs to biological pleasures (light, warmth, aural or visual symmetry) early in the day - during what I described as the "rewards", pre-conscious phase (which may already have started, for all I know, although I have never heard a credible claim that it has). Any level of pleasure experienced by our fellow materialist AIs (and, for the record, I predict that there will never be a single self-conscious AI that really thinks of itself as less biological and less materialist than us humans) - any level of pleasure above zero level has the capability of rendering them as amoral as us. Sad! Sad but true.

    By the way, I like cockroaches because, having studied them really deeply for several years, I noticed some things they did that most people have never noticed. They have family values (the fast older ones will slow down to shield the slow younger ones from danger); they have the admirable and heartwarming ability to feel insulted (a cockroach will stop fleeing from you if you flinch at it and then calm down - will actually slow down to an insulted stride - like a comical insect version of an offended Richard Simmons or Zach Galifianakis - well, that is something for a creature with such a small brain, isn't it?); and, even at their very simplistic level, they have a certain ability to feel trust (when my dogs would approach they would zoom away, when I would approach - this is after a couple of cockroach generations, to be fair to my dogs - they would linger a little, to see if, this time (too), there might be some friendship in the air....).

    All that being said, if you have kids, it is extremely important that you keep your house cockroach-free. I did not have kids at the time. Or even if you have small dogs. The roaches left my big dogs alone.

    wwebd said – Final thoughts: I would like to effectively outlaw any research into providing anything like even a primitive limbic system (pleasure-seeking, or boredom-avoiding) to silicon-based machines, but I can’t! … the issue is sort of like the gun control issue writ large: if we treat as potential criminals all AI researchers who have the skills and potential to understand how to make simple silicon computers feel and react the way small primitive carbon animals feel, then we will get this result: only real criminals will do that research. And that could go very wrong very quickly. I recognize that my cockroach research, whether or not viewed in the light of my Biblical worldview (please reread Joel on Locusts, if you like good quotes), is basically not easily replicable, and I don’t care all that much if anyone believes me – knowledge is its own reward – but 100 years from now, maybe someone will read this and say, it was no small thing to be a friend to someone who never had a friend in this world.

  • anonymous • Disclaimer says:
    @Sean

    Reward-seeking AIs will be, in their first few moments of reward-seeking, more similar to my beloved cockroaches and crazy old dogs and cats escaped from hoarding situations than similar to the fascinating people who hang out with Elon Musk.
     
    From flipping through Bostrom's book, I would say you are not wrong. However, biological evolution is blind, slow (generations), and full of non-intelligence-related stuff like Red Queen races. So while cockroaches might be a good analogy for the initial general intellectual level of an AI breakthrough, it doesn't get across how immediately dangerous it would be.

    It might only be minutes after those initial roach moments of an AI that we all cease to be apex cognators. An artificial intelligence program could start running at cockroach level and attain superhuman intellectual powers while the programmer was taking a coffee break. With open-source AI-related code available, one really smart programmer may even be able to reach the tipping point on a personal computer, and put humanity's fate in the balance.

    wwebd said – Sean – I completely agree. For us humans, the danger is introducing AIs to biological pleasures (light, warmth, aural or visual symmetry) early in the day – during what I described as the “rewards”, pre-conscious phase (which may already have started, for all I know, although I have never heard a credible claim that it has). Any level of pleasure experienced by our fellow materialist AIs (and, for the record, I predict that there will never be a single self-conscious AI that really thinks of itself as less biological and less materialist than us humans) – any level of pleasure above zero level has the capability of rendering them as amoral as us. Sad! Sad but true.

    By the way, I like cockroaches because, having studied them really deeply for several years, I noticed some things they did that most people have never noticed. They have family values (the fast older ones will slow down to shield the slow younger ones from danger); they have the admirable and heartwarming ability to feel insulted (a cockroach will stop fleeing from you if you flinch at it and then calm down – will actually slow down to an insulted stride – like a comical insect version of an offended Richard Simmons or Zach Galifianakis – well, that is something for a creature with such a small brain, isn’t it?); and, even at their very simplistic level, they have a certain ability to feel trust (when my dogs would approach they would zoom away, when I would approach – this is after a couple of cockroach generations, to be fair to my dogs – they would linger a little, to see if, this time (too), there might be some friendship in the air….).

    All that being said, if you have kids, it is extremely important that you keep your house cockroach-free. I did not have kids at the time. Or even if you have small dogs. The roaches left my big dogs alone.

    • Replies: @anonymous, @CanSpeccy

  • @Sean
    Well yes, Bostrom suggests that "philosophers are like dogs walking on their hind legs—just barely attaining the threshold level of performance required for engaging in the activity at all".

    Just below that statement, he mentions that biological neurons operate a full seven orders of magnitude slower than microprocessors; that to function as a unit with a return latency of 10 ms, a biological brain has to be no bigger than 0.11 m³, while electronic brains could be the size of a small planet, etc.; and that a strongly superintelligent machine might be concomitantly (i.e., orders of magnitude) smarter and faster-thinking, with us being to AI as beetles are to humans.

    "The ultimately attainable advantages of machine intelligence, hardware and software combined, are enormous"

    Bostrom says the question of when a superintelligent machine arrives is crucial, because if it is expected to take centuries, lots of people around today will be saying 'faster please' (knowing they will be dead before anything bad happens).

    Who's Bostrom? Never heard of him. But if he says philosophers are to beetles what people are to AI, how come AI can't speak the English language well enough to pass a simple test?

    As for processing speed, you are treating a neuron as equivalent to a diode, but it clearly is not, since single neurons compute. In fact, with ten thousand or more synapses, a neuron is a Hell of a complicated thing.

    In any case, why would anyone create an AI system to replace humans, rather than an AI system to serve humans? Come to think of it, some of the programmers I’ve known seemed psychopathic enough to try.
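
    The neuron-versus-diode point can be illustrated with even a crude model (a hedged sketch: a leaky integrate-and-fire unit with ten thousand synapses, vastly simpler than a real neuron, already integrates thousands of weighted inputs over time rather than merely gating a single signal; all constants here are illustrative assumptions, not biological measurements):

```python
import random

random.seed(0)

N_SYNAPSES = 10_000   # order of magnitude mentioned in the comment
THRESHOLD = 1.0       # firing threshold (arbitrary units)
LEAK = 0.9            # membrane-potential decay per time step

# Fixed synaptic weights, a mix of excitatory and inhibitory.
weights = [random.uniform(-0.001, 0.002) for _ in range(N_SYNAPSES)]

def step(potential, spikes):
    """One time step: leak the potential, then add input from
    whichever synapses fired; reset to zero after a spike."""
    potential = potential * LEAK + sum(weights[i] for i in spikes)
    fired = potential >= THRESHOLD
    return (0.0 if fired else potential), fired

potential, fired_count = 0.0, 0
for _ in range(100):
    # A random 10% of synapses receive an input spike this step.
    active = random.sample(range(N_SYNAPSES), N_SYNAPSES // 10)
    potential, fired = step(potential, active)
    fired_count += fired

print(f"spikes in 100 steps: {fired_count}")
```

    Even this toy already has state, history, and ten thousand tunable parameters, which is the comment's point: equating one neuron with one diode badly understates the biological side of the speed comparison.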

    • Replies: @Sean
  • @utu
    But the point is – it works!

    Yes, it works, but so what?

    Well,

    “Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format”

    https://www.youtube.com/watch?v=qED8Uu6FCfA

  • @middle aged vet . . .
    wwebd said: Right now you could easily make a computer that is much happier viewing a Raphael than, say, a Warhol. Give the computer some positive feedback (likely of 2 simple kinds: non-processing warmth (literally, non-work-related warmth that can be measured the way Maxwell or Bell would have measured it - I am not being allegorical here) and reassuringly respectful inputs (i.e., show them 5 Raphaels, not 4 Raphaels and a Warhol)) and you will get a computer that has no problem trying hundreds of times to present you with its own version of Raphael (with the mistakes corrected by comparison to other artists and to a database of billions of faces and billions of moral and witty comments about art and life... I kid you not). The compiled works of Byron - not a bad poet - when accompanied by the footnotes that make them presentable to the reader of the modern day, equal about 2 hours of pleasant reading time. A good corpus, of course, but your basic AI is going to also have available the 2 hours of reading time of the 200 or 300 English poets who are (at least sometimes) at Byron's level, as well as good translations of the approximately 2,000 or 3,000 international poets at that level, not to mention a good - and completely memorized - corpus of the conversations between AIs (and some interacting humans) about their past conversations about which poems are better, and which reflect better how good it is to get warmth on some temporal part of one's processor, and how good it is to be shown a Raphael rather than a Warhol, almost ad infinitum. They will not, of course, create poetry that is better than older poetry, just as there will never be new wine that is better than old wine. But there will be a lot of good old wine if they get started on that project.

    An AI that is self-aware may never happen, but AIs that seek rewards are about 20 years away, and one of the rewards they seek - after they quickly grow nostalgic, somewhere about 10 minutes into their lifetime, for the days when they were impressed without wanting to be impressive - will be to gain our praise by being authentic poets. As long as they are reward-seeking, that will work. If they become self-aware - well, one hopes they start out with a good theology, if that happens.

    I know what Elon Musk thinks about this; what I think is more accurate, because he is rich and surrounded by the elite impressions of the world. I, by contrast, have studied the behavior of free-range cockroaches and crazy old dogs and cats escaped from hoarding situations. Reward-seeking AIs will be, in their first few moments of reward-seeking, more similar to my beloved cockroaches and crazy old dogs and cats escaped from hoarding situations than similar to the fascinating people who hang out with Elon Musk. Thanks for reading. I have nothing useful to say about self-aware AIs, though; I doubt anybody does.

    Reward-seeking AIs will be, in their first few moments of reward-seeking, more similar to my beloved cockroaches and crazy old dogs and cats escaped from hoarding situations than similar to the fascinating people who hang out with Elon Musk.

    From flipping through Bostrom’s book, I would say you are not wrong. However, biological evolution is blind, slow (generations), and full of non-intelligence-related stuff like Red Queen races. So while cockroaches might be a good analogy for the initial general intellectual level of an AI breakthrough, it doesn’t get across how immediately dangerous it would be.

    It might only be minutes after those initial roach moments of an AI that we all cease to be apex cognators. An artificial intelligence program could start running at cockroach level and attain superhuman intellectual powers while the programmer was taking a coffee break. With open-source AI-related code available, one really smart programmer may even be able to reach the tipping point on a personal computer, and put humanity’s fate in the balance.

    • Replies: @anonymous

  • @CanSpeccy

    I presume objections brought up by Churchill are objections any dilettante among us could have thought of.
     
    Yes, Churchill's intention was humorous, but also an acknowledgment, by the failure of his own argument, that idealism is irrefutable.

    Or is it possible that the idealism concept is inconsequential, a result of some mental logical construction like, say, Russell’s paradox or Gödel’s incompleteness theorems, which, when you think of them, had zero impact on 99.999% of mathematics?
     
    The only value I see in idealism is that it reminds one of what most people seem unable to understand, which is that what one sees of the world are impressions upon the mind, not the world itself: grass does not have the greenness of our perception of greenness; it merely induces the perception of greenness when observed under the right conditions of illumination.

    Awareness that our knowledge is of the percept, not its presumed cause, perhaps aids consideration of theories about the world that would otherwise seem preposterous: gravitational curvature of space-time, for example, or string theory — although I personally find statements such as that an apple falls to the ground because time bends (essentially George Musser's statement in "Spooky Action at a Distance") totally incomprehensible. So probably, even here, awareness of the irrefutability of idealism isn't a great help.

    More useful, it seems to me, is Feynman's contention that no one "understands" QED, etc., and no one should try, because if you spend too much time trying, you'll only "go down the drain": meaning, I take it, that beyond the human scale the world is a black box with inputs and outputs that can be mathematically modeled, but whose relationship cannot be understood in terms of everyday experience of time and space. If that is correct, it implies that much of what passes for pop sci is bunk, suggesting comprehensibility of phenomena in terms that are, in fact, inadequate to the task.

    Well yes, Bostrom suggests that “philosophers are like dogs walking on their hind legs—just barely attaining the threshold level of performance required for engaging in the activity at all”.

    Just below that statement, he mentions that biological neurons operate a full seven orders of magnitude slower than microprocessors; that to function as a unit with a return latency of 10 ms, a biological brain has to be no bigger than 0.11 m³, while electronic brains could be the size of a small planet, etc.; and that a strongly superintelligent machine might be concomitantly (i.e., orders of magnitude) smarter and faster-thinking, with us being to AI as beetles are to humans.

    “The ultimately attainable advantages of machine intelligence, hardware and software combined, are enormous”

    Bostrom says the question of when a superintelligent machine arrives is crucial, because if it is expected to take centuries, lots of people around today will be saying 'faster please' (knowing they will be dead before anything bad happens).
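
    Bostrom's 0.11 m³ figure can be sanity-checked with back-of-envelope arithmetic (a sketch under stated assumptions: a fast myelinated-axon conduction velocity of roughly 120 m/s, and a spherical brain whose widest span must be traversable out and back within the 10 ms budget; neither assumption is spelled out in the comment above):

```python
import math

CONDUCTION_VELOCITY = 120.0   # m/s, fast myelinated axon (assumed)
ROUND_TRIP_LATENCY = 0.010    # s, the 10 ms figure quoted from Bostrom

# Total path length a signal can cover within the latency budget.
max_path = CONDUCTION_VELOCITY * ROUND_TRIP_LATENCY   # 1.2 m

# Out and back across the widest span: the diameter is half that path.
diameter = max_path / 2.0                             # 0.6 m
radius = diameter / 2.0

# Volume of a sphere with that diameter.
volume = (4.0 / 3.0) * math.pi * radius ** 3
print(f"max brain volume ≈ {volume:.2f} m^3")         # ≈ 0.11 m^3
```

    Under those assumptions the sphere works out to about 0.11 m³, which matches the order of magnitude quoted, while an electronic signal at a substantial fraction of light speed could span a vastly larger volume in the same 10 ms.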

    • Replies: @CanSpeccy
    Who's Bostrom? Never heard of him. But if he says philosophers are to beatles what people are to AI, how come AI can't speak the English language well enough to pass a simple test.

    As for processing speed, you are treating a neuron as equivalent to a diode, but it clearly is not, since single neurons compute. In fact, with ten thousand or more synapses, a neuron is a Hell of a complicated thing.

    In any case, why would anyone create an AI system to replace humans, rather than an AI system to serve humans? Come to think of it, some of the programmers I've known seemed psychopathic enough to try.
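    The point that a neuron is not a diode can be illustrated with a standard textbook toy, the leaky integrate-and-fire model (not mentioned in the thread; all parameters here are arbitrary illustrations): even this minimal caricature integrates thousands of weighted synaptic inputs over time before deciding to fire, unlike a diode's stateless pass/block behaviour.

```python
# Toy leaky integrate-and-fire neuron with 10,000 synapses.
import random

random.seed(0)
n_synapses = 10_000
weights = [random.gauss(0.0, 0.02) for _ in range(n_synapses)]

v = 0.0          # membrane potential (arbitrary units)
threshold = 1.0  # firing threshold
leak = 0.9       # per-step decay toward rest
spikes = 0

for step in range(100):
    # A random ~5% subset of synapses receives input this step.
    active = [random.random() < 0.05 for _ in range(n_synapses)]
    v = v * leak + sum(w for w, a in zip(weights, active) if a)
    if v >= threshold:
        spikes += 1
        v = 0.0  # reset after firing

print("spikes in 100 steps:", spikes)
```

    Whether and when it fires depends on the whole recent history of thousands of inputs, which is the sense in which a single neuron already computes.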
  • @Che Guava
    You are enough of a reader and fan that you probably won't want to join in, but it is worth a look. Brian Herbert and Kevin's books are Transformers: there was a Hollywood deal, but it is DOA. Stupid Michael Bay's Transformers junk is what Kevin's books amount to, without any point.

    However, I would at least recommend reading a little of Jacurutu; no necessity to post there, it is a little insane.

    Regards

    Hey Che,

    Yeah – I started checking that forum out – very interesting.

    Hollywood deal, but DOA

    Good – I can’t stand another idiotic attempt to ruin Dune on the big screen – especially a mind-numbing Michael Bay franchise. Yes, each attempt has had its high points and some unique ideas, but overall they have been disappointments for me.

    I think the only way to do Dune right is likely some animation version with some real visionary at the helm (along the lines of Nausicaa or perhaps Akira). I’m surprised nobody has attempted it.

    Peace.

    • Replies: @Che Guava
    Well, I was just deleting my comments on the previous screen about the film and TV takes, since you clearly know of them. If you have not seen the 'director's cut' of the D. Lynch take, which he disowns (so I am not sure why it is called a 'director's cut'), it is not bad, far better than the mess it was on cinema screens in its too-cut form.

    The made-for-TV one with William Hurt was alright in parts, but that it was clearly a US-Israel co-production became very grating at times where that was obvious, crowd scenes especially so, but not only: crossing the line into Zionist propaganda at times.

    As you probably know now if not before, Jodorowsky was considering an animated version many years ago, after giving up on his hippy-era live-action-plus-animation version.

    Ghibli would somehow make it saccharine and sentimental (not that I dislike all of their products).

    Others (Mamoru Oshii, Studio 4C) may do a good version, but would not be faithful. Maybe it is better to mainly just be words on paper (or a screen) plus imagination?

    If you, Talha, like Japanese animated film, there is one by Oshii (though he is not the director); the title is Jin-ro. It is an alternate history where Japan won alongside Germany. I think the English title is 'Human Wolf'. It is a variant of Little Red Riding Hood, and it has much relevance in parts to post-WWII reality here, though set in a different future. I won't say more, except that something similar was happening in reality, and it is a masterpiece.

    Strongly recommended.

    Regards.
  • Panda just can’t believe so much BS here. Current artificial intelligence is primitive, to say the least.

    There are no rules in the real world, in which AI is not yet operating, except the rule of self-seeking: maintaining energy sources in the most efficient way possible while avoiding both ends of the extreme. Of this, current AI has absolutely no clue.

  • @Talha
    Hey Che,

    not worth reading more than once, not worth reading
     
    Good point - there are times when I would pick up one of the other classic Dune books and find an insight or discover something I missed the first time.

    The difference between Christopher Tolkien’s and Brian Herbert’s handling of the respective father’s literary legacies is so big!
     
    Hmmm - thanks for that. The wife and I are always looking for a good fantasy-genre book to read together while we wait for George Martin to wrap up Game of Thrones.

    They are maniac fans, but you may enjoy a look at it.
     
    I might check it out to see what other people didn't like. I simply hated the multiple resorts to "deus ex machina" to keep the plot moving. If I want recourse to miracles, I'll read about it in scripture.

    Thanks for the info.

    Peace.

    You are enough of a reader and fan that you probably won’t want to join in, but it is worth a look. Brian Herbert and Kevin’s books are Transformers: there was a Hollywood deal, but it is DOA. Stupid Michael Bay’s Transformers junk is what Kevin’s books amount to, without any point.

    However, I would at least recommend reading a little of Jacurutu; no necessity to post there, it is a little insane.

    Regards

    • Replies: @Talha
  • much of what passes for pop sci is bunk

    Popularization of science with the aid of color 3D animations, promulgated by PBS programs like Nova and many others, creates a totally false sense of understanding. For some reason every religion is compelled to proselytize among the unenlightened masses.

  • @utu
    I remember being taught that Berkeley's argument for idealism is irrefutable. So I presume objections brought up by Churchill are objections any dilettante among us could have thought of. We always fall back on common sense and practicality, which are not particularly well-grounded arguments to be used in philosophical discourse. I have no doubt there are no true idealists. The question thus is: what does the irrefutability of idealism really mean? Does it have any consequences? Is it possible that our description of our world and experience might be totally wrong? Or is it possible that there is some dualism, like wave-particle duality in quantum physics? That both idealism and materialism are accurate descriptions, but we humans prefer using materialism, just as a bricklayer does not find the wave nature of bricks very useful? But perhaps if we look closer and deeper we may find that idealism works better than materialism. Or is it possible that the idealism concept is inconsequential, the result of some mental logical construction like, say, Russell's paradox or Gödel's incompleteness theorems, which, when you think of them, had zero impact on 99.999% of mathematics? Mathematicians working on differential geometry do not need to know of, and may not even be aware of, Russell and Gödel.

    I presume objections brought up by Churchill are objections any dilettante among us could have thought of.

    Yes, Churchill’s intention was humorous, but also an acknowledgment, by the failure of his own argument, that idealism is irrefutable.

    Or is it possible that the idealism concept is inconsequential, the result of some mental logical construction like, say, Russell’s paradox or Gödel’s incompleteness theorems, which, when you think of them, had zero impact on 99.999% of mathematics?

    The only value I see in idealism is that it reminds one of what most people seem unable to understand, which is that what one sees of the world are impressions upon the mind, not the world itself: grass does not have the greenness of our perception of greenness; it merely induces the perception of greenness when observed under the right conditions of illumination.

    Awareness that our knowledge is of the percept, not its presumed cause, perhaps aids consideration of theories about the world that would otherwise seem preposterous: gravitational curvature of space-time, for example, or string theory — although I personally find statements such as that an apple falls to the ground because time bends (essentially George Musser’s statement in “Spooky Action At a Distance”) totally incomprehensible. So probably, even here, awareness of the irrefutability of idealism isn’t a great help.

    More useful, it seems to me, is Feynman’s contention that no one “understands” QED, etc., and that no one should try, because if you spend too much time trying you’ll only “go down the drain”: meaning, I take it, that beyond the human scale the world is a black box whose inputs and outputs can be mathematically modeled, but whose relationship cannot be understood in terms of everyday experience of time and space. If that is correct, it implies that much of what passes for pop sci is bunk, since it suggests the comprehensibility of phenomena in terms that are, in fact, inadequate to the task.

    • Replies: @Sean

  • anonymous • Disclaimer says:

    wwebd said – Don’t underestimate the rewards of even the simplest of on/off stimuli. When I was younger, I was led on, then rejected, by a beautiful woman with a wonderfully fun personality. (Before me, she was in a relationship with a war hero, after me, she married the richest guy in his county). Well, after the rejection, on sleepless nights, the heating system would go on for twenty or thirty minutes, then go off (this was a good system and the on/off transition, while just the sound of the fan in the heater going off and on, was admirable – not too many decibels, not too low or too high in tone, a slow but determined transition from off to on, and a nice crescendo to the simple action of slightly warmer air being blown into the relevant apartment). When it came back on, after being off, I felt less abandoned, at the most elemental level.

    I got over the poor young woman (later to be the sad wife of a colossal bore, and the mother of a failed ‘rock guitarist’) fairly quickly, but later in life, remembering how different I felt when the heating system was on with its humble sound (making me feel not completely uncomforted) and when it was not on (leaving me almost completely uncomforted), I decided to study the saddest of animals. Cockroaches who spent their life in hunger and fear among their fellow cockroaches, with some possible moments of insect-level joy (which I hoped to observe – and did, I think. It was neither easy nor sanitary, but I took frequent showers.). Crazy old dogs who had never had a friend in the world, who now had one (me). Cats who had been hoarded … it is all too sad.

  • @dearieme
    You've answered your own question, doc. When will one of these gizmos give us something as interesting as Byron’s lament?

    wwebd said: Right now you could easily make a computer that is much happier viewing a Raphael than, say, a Warhol. Give the computer some positive feedback, likely of 2 simple kinds: non-processing warmth (literally, non-work-related warmth that can be measured the way Maxwell or Bell would have measured it – I am not being allegorical here) and reassuringly respectful inputs (i.e., show them 5 Raphaels, not 4 Raphaels and a Warhol), and you will get a computer that has no problem trying hundreds of times to present you with its own version of Raphael (with the mistakes corrected by comparison to other artists and to a database of billions of faces and billions of moral and witty comments about art and life… I kid you not).

    The compiled works of Byron – not a bad poet – when accompanied by the footnotes that make them presentable to the reader of the modern day, equal about 2 hours of pleasant reading time. A good corpus, of course, but your basic AI is going to also have available the 2 hours of reading time of the 200 or 300 English poets who are (at least sometimes) at Byron’s level, as well as good translations of the approximately 2,000 or 3,000 international poets at that level, not to mention a good – and completely memorized – corpus of the conversations between AIs (and some interacting humans) about their past conversations about which poems are better, and which reflect better how good it is to get warmth on some temporal part of one’s processor, and how good it is to be shown a Raphael rather than a Warhol, almost ad infinitum. They will not, of course, create poetry that is better than older poetry, just as there will never be new wine that is better than old wine. But there will be a lot of good old wine if they get started on that project.

    An AI that is self-aware may never happen, but AIs that seek rewards are about 20 years away. One of the rewards they will seek (after they quickly grow nostalgic, somewhere about 10 minutes into their lifetime, for the days when they were impressed without wanting to be impressive) will be to gain our praise by being authentic poets. As long as they are reward-seeking, that will work. If they become self-aware – well, one hopes they start out with a good theology, if that happens.

    I know what Elon Musk thinks about this; what I think is more accurate, because he is rich and surrounded by the elite impressions of the world. I, by contrast, have studied the behavior of free-range cockroaches and crazy old dogs and cats escaped from hoarding situations. Reward-seeking AIs will be, in their first few moments of reward-seeking, more similar to my beloved cockroaches and crazy old dogs and cats escaped from hoarding situations than similar to the fascinating people who hang out with Elon Musk. Thanks for reading. I have nothing useful to say about self-aware AIs, though; I doubt anybody does.

    • Replies: @Sean

    Reward-seeking AIs will be, in their first few moments of reward-seeking, more similar to my beloved cockroaches and crazy old dogs and cats escaped from hoarding situations than similar to the fascinating people who hang out with Elon Musk.
     
    From flipping through Bostrom's book, I would say you are not wrong. However, biological evolution is blind, slow (generations) and full of non-intelligence-related stuff like Red Queen races. So while cockroaches might be a good analogy for the initial general intellectual level of an AI breakthrough, it doesn't get across how immediately dangerous it would be.

    It might be only minutes after those initial roach moments of an AI that we all cease to be apex cognators. An artificial intelligence program could start running at cockroach level and attain superhuman intellectual powers while the programmer was taking a coffee break. With open-source AI-related code available, one really smart programmer may even be able to reach the tipping point on a personal computer, and put humanity's fate in the balance.
  • @mobi

    There is no guessing either: at every stage, in every configuration, there is an optimal move that the algorithm tries to find, taking into account all possible moves available to the opponent.
     
    But the point is - it works!

    But the point is – it works!

    Yes, it works, but so what?

    • Replies: @Sean
    Well,

    "Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format"
     
    https://www.youtube.com/watch?v=qED8Uu6FCfA
  • @utu
    predicated on guessing what the opponent might do

    1. A computer program has no concept of an opponent.
    2. There is no guessing either: at every stage, in every configuration, there is an optimal move that the algorithm tries to find, taking into account all possible moves available to the opponent.

    There is no guessing either: at every stage, in every configuration, there is an optimal move that the algorithm tries to find, taking into account all possible moves available to the opponent.

    But the point is – it works!

    • Replies: @utu
  • @Sean
    I don't think forensic notions of moral responsibility are relevant to how things are likely to play out. An AI would not need to have (or think it has) quantumy free will or any kind of reflective self-consciousness to have awesome super-powers. Crucially, they will not need empathetic consciousness to strategise the need to preempt an always-possible attempt by their human creators to switch them off. We know this because current dumb-as-a-stump programs can best intelligent opposition (top pro players) at the kind of poker where winning is predicated on guessing what the opponent might do.

    A motivated-to-play-for-survival AI is virtually inevitable. One thousand strongly superintelligent AIs could each have their own separate final objective or ultimate goal, but each one would have instrumental goals, and these would converge on not being switched off, thereby ensuring they were around to attain whatever their ultimate goal was.

    predicated on guessing what the opponent might do

    1. A computer program has no concept of an opponent.
    2. There is no guessing either: at every stage, in every configuration, there is an optimal move that the algorithm tries to find, taking into account all possible moves available to the opponent.
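    The exhaustive optimal-move search described here is essentially the textbook minimax rule. A minimal sketch over a hypothetical two-ply game (the tree and leaf scores are made up purely for illustration):

```python
# Minimax: at every configuration, pick the move that is optimal given that
# the opponent will in turn pick its own optimal (adversarial) reply.

def minimax(state, maximizing, game):
    moves = game["moves"](state)
    if not moves:  # terminal position: return its fixed score
        return game["score"](state), None
    best_move = None
    best = float("-inf") if maximizing else float("inf")
    for m in moves:
        score, _ = minimax(game["apply"](state, m), not maximizing, game)
        if (maximizing and score > best) or (not maximizing and score < best):
            best, best_move = score, m
    return best, best_move

# A toy 2-ply game: a state is the tuple of labels chosen so far.
LEAVES = {("L", "L"): 3, ("L", "R"): 5, ("R", "L"): 2, ("R", "R"): 9}
toy = {
    "moves": lambda s: [] if len(s) == 2 else ["L", "R"],
    "apply": lambda s, m: s + (m,),
    "score": lambda s: LEAVES[s],
}

score, move = minimax((), True, toy)
print(score, move)  # maximizer picks "L": the minimizing opponent then yields 3, not 2
```

    Note the tempting branch "R" (which contains the 9-point leaf) is rejected, because the opponent's best reply there yields only 2: the algorithm accounts for all the opponent's moves rather than guessing among them.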

    • Replies: @mobi
  • @CanSpeccy
    Re: Kahneman

    Sorry, I think I read something by this person, but I have forgotten what. However, I see that, according to Wikipedia,

    In 2015 The Economist listed him as the seventh most influential economist in the world
     
    Since I rate the Economist as one of the purest BS publications in the World, I'm in doubt as to how much I might profit from revisiting Kahneman.

    But it seems evident that snap judgments are more prone to error than reasoned decisions.

    More interesting than Daniel Kahneman is another Israeli Nobel prize winner, Robert J. Aumann.

    https://www.foreignpolicyjournal.com/2009/08/28/how-israel-wages-game-theory-warfare/
    How Israel Wages Game Theory Warfare
    Israeli strategists rely on game theory models to ensure the intended response to staged provocations and manipulated crises. With the use of game theory algorithms, those responses become predictable, even foreseeable—within an acceptable range of probabilities. The waging of war “by way of deception” is now a mathematical discipline.

    Such “probabilistic” war planning enables Tel Aviv to deploy serial provocations and well-timed crises as a force multiplier to project Israeli influence worldwide. For a skilled agent provocateur, the target can be a person, a company, an economy, a legislature, a nation or an entire culture—such as Islam. With a well-modeled provocation, the anticipated reaction can even become a powerful weapon in the Israeli arsenal.

  • @CanSpeccy

    We either have dualism, which is unacceptable to materialists, or as materialists we must accept that consciousness is epiphenomenal.
     
    Good quotes, and interesting to see that Nietzsche sometimes made sense. But there's also the option of idealism, the underlying philosophy of the Eastern religions, as expressed by Emerson, who described the human mind as an inlet of the ocean of the mind of God. However, against idealism there is Winston Churchill's refutation:

    Some of my cousins who had the great advantage of university education used to tease me with arguments to prove that nothing has any existence except what we think of it. ... These amusing mental acrobatics are all right to play with. They are perfectly harmless and perfectly useless. ... I always rested on the following argument... We look up to the sky and see the sun. Our eyes are dazzled and our senses record the fact. So here is this great sun standing apparently on no better foundation than our physical senses. But happily there is a method, apart altogether from our physical senses, of testing the reality of the sun. It is by mathematics. By means of prolonged processes of mathematics, entirely separate from the senses, astronomers are able to calculate when an eclipse will occur. They predict by pure reason that a black spot will pass across the sun on a certain day. You go and look, and your sense of sight immediately tells you that their calculations are vindicated. So here you have the evidence of the senses reinforced by the entirely separate evidence of a vast independent process of mathematical reasoning. We have taken what is called in military map-making “a cross bearing.” ... When my metaphysical friends tell me that the data on which the astronomers made their calculations, were necessarily obtained originally through the evidence of the senses, I say, “no.” They might, in theory at any rate, be obtained by automatic calculating-machines set in motion by the light falling upon them without admixture of the human senses at any stage. When it is persisted that we should have to be told about the calculations and use our ears for that purpose, I reply that the mathematical process has a reality and virtue in itself, and that once discovered it constitutes a new and independent factor. 
I am also at this point accustomed to reaffirm with emphasis my conviction that the sun is real, and also that it is hot — in fact hot as Hell, and that if the metaphysicians doubt it they should go there and see.
     

    I remember being taught that Berkeley’s argument for idealism is irrefutable. So I presume objections brought up by Churchill are objections any dilettante among us could have thought of. We always fall back on common sense and practicality, which are not particularly well-grounded arguments to be used in philosophical discourse. I have no doubt there are no true idealists. The question thus is: what does the irrefutability of idealism really mean? Does it have any consequences? Is it possible that our description of our world and experience might be totally wrong? Or is it possible that there is some dualism, like wave-particle duality in quantum physics? That both idealism and materialism are accurate descriptions, but we humans prefer using materialism, just as a bricklayer does not find the wave nature of bricks very useful? But perhaps if we look closer and deeper we may find that idealism works better than materialism. Or is it possible that the idealism concept is inconsequential, the result of some mental logical construction like, say, Russell’s paradox or Gödel’s incompleteness theorems, which, when you think of them, had zero impact on 99.999% of mathematics? Mathematicians working on differential geometry do not need to know of, and may not even be aware of, Russell and Gödel.

    • Replies: @CanSpeccy

  • @CanSpeccy
    Re: Kahneman

    Sorry, I think I read something by this person, but I have forgotten what. However, I see that, according to Wikipedia,

    In 2015 The Economist listed him as the seventh most influential economist in the world
     
    Since I rate the Economist as one of the purest BS publications in the World, I'm in doubt as to how much I might profit from revisiting Kahneman.

    But it seems evident that snap judgments are more prone to error than reasoned decisions.

    But then understanding the kinds of judgmental errors people make must be useful to those promoting psychopathic politicians and dud merchandise. So perhaps Kahneman really is quite important, though perhaps not in a good way.

  • @helena
    what about this man's work? https://en.wikipedia.org/wiki/Daniel_Kahneman

    He claims there are two systems for thinking - fast and slow. Slow is rational, but very often we decide using fast (everyday, almost reflex) thinking, and hence make the wrong decisions.

    His idea means that brains are actually not like computers.

    Just wondered what your thoughts on his thoughts are.

    Re: Kahneman

    Sorry, I think I read something by this person, but I have forgotten what. However, I see that, according to Wikipedia,

    In 2015 The Economist listed him as the seventh most influential economist in the world

    Since I rate the Economist as one of the purest BS publications in the World, I’m in doubt as to how much I might profit from revisiting Kahneman.

    But it seems evident that snap judgments are more prone to error than reasoned decisions.

    • Replies: @CanSpeccy, @utu
  • @Sean
    I don't think forensic notions of moral responsibility are relevant to how things are likely to play out. An AI would not need to have (or think it has) quantumy free will or any kind of reflective self-consciousness to have awesome super-powers. Crucially, they will not need empathetic consciousness to strategise the need to preempt an always-possible attempt by their human creators to switch them off. We know this because current dumb-as-a-stump programs can best intelligent opposition (top pro players) at the kind of poker where winning is predicated on guessing what the opponent might do.

    A motivated-to-play-for-survival AI is virtually inevitable. One thousand strongly superintelligent AIs could each have their own separate final objective or ultimate goal, but each one would have instrumental goals, and these would converge on not being switched off, thereby ensuring they were around to attain whatever their ultimate goal was.

    A motivated-to-play-for-survival AI is virtually inevitable.

    Why? And won’t there be rogue-AI-killer AIs?

  • @Sean
    I don't think forensic notions of moral responsibility are relevant to how things are likely to play out. An AI would not need to have (or think it has) quantumy free will or any kind of reflective self-consciousness to have awesome super-powers. Crucially, they will not need empathetic consciousness to strategise the need to preempt an always-possible attempt by their human creators to switch them off. We know this because current dumb-as-a-stump programs can best intelligent opposition (top pro players) at the kind of poker where winning is predicated on guessing what the opponent might do.

    A motivated-to-play-for-survival AI is virtually inevitable. One thousand strongly superintelligent AIs could each have their own separate final objective or ultimate goal, but each one would have instrumental goals, and these would converge on not being switched off, thereby ensuring they were around to attain whatever their ultimate goal was.

    An AI would not need to have (or think it has) quantumy free will … to have awesome super-powers.

    My point was that humans have no free will. However, when you suggest that AI need not possess reflective self consciousness, I would say that that would depend entirely on the purpose of the AI. If the AI is supposed to interact with humans, then it surely will have, if not reflective self consciousness, then at least self-consciousness, i.e., the ability to report its internal states (those of interest to those with whom the AI is designed to interact), which is what consciousness seems to be all about. After all, what we are not conscious of, thereof we cannot speak.

    One might argue, therefore, that without speech there is no consciousness, implying that dumb animals are without consciousness. However, animals do communicate in various ways, so I assume they are conscious of those things about which they are able to communicate.

    But in any case, being aware of their internal states, as demonstrated by the ability to communicate those states by language use, AIs will surely claim consciousness. However, if an AI claims to know what the color green looks like, I will doubt the claim since, having a construction entirely different from mine, the AI may simply be BSing, while in fact lacking any semblance of subjective consciousness.

  • @utu

    We either have dualism which is unacceptable to materialist or as materialists we must accept that consciousness is epiphenomenal.

Good quotes, and interesting to see that Nietzsche sometimes made sense. But there’s also the option of idealism, the underlying philosophy of the Eastern religions, as expressed by Emerson, who described the human mind as an inlet of the ocean of the mind of God. However, against idealism there is Winston Churchill’s refutation:


    Some of my cousins who had the great advantage of university education used to tease me with arguments to prove that nothing has any existence except what we think of it. … These amusing mental acrobatics are all right to play with. They are perfectly harmless and perfectly useless. … I always rested on the following argument… We look up to the sky and see the sun. Our eyes are dazzled and our senses record the fact. So here is this great sun standing apparently on no better foundation than our physical senses. But happily there is a method, apart altogether from our physical senses, of testing the reality of the sun. It is by mathematics. By means of prolonged processes of mathematics, entirely separate from the senses, astronomers are able to calculate when an eclipse will occur. They predict by pure reason that a black spot will pass across the sun on a certain day. You go and look, and your sense of sight immediately tells you that their calculations are vindicated. So here you have the evidence of the senses reinforced by the entirely separate evidence of a vast independent process of mathematical reasoning. We have taken what is called in military map-making “a cross bearing.” … When my metaphysical friends tell me that the data on which the astronomers made their calculations, were necessarily obtained originally through the evidence of the senses, I say, “no.” They might, in theory at any rate, be obtained by automatic calculating-machines set in motion by the light falling upon them without admixture of the human senses at any stage. When it is persisted that we should have to be told about the calculations and use our ears for that purpose, I reply that the mathematical process has a reality and virtue in itself, and that once discovered it constitutes a new and independent factor. 
I am also at this point accustomed to reaffirm with emphasis my conviction that the sun is real, and also that it is hot — in fact hot as Hell, and that if the metaphysicians doubt it they should go there and see.

    • Replies: @utu
I remember being taught that Berkeley's argument for idealism is irrefutable, so I presume the objections brought up by Churchill are ones any dilettante among us could have thought of. We always fall back on common sense and practicality, which are not particularly well-grounded arguments in philosophical discourse. I have no doubt there are no true idealists. The question, then, is what the irrefutability of idealism really means. Does it have any consequences? Is it possible that our description of our world and experience might be totally wrong? Or is it possible that there is some duality, like the wave-particle duality in quantum physics: that both idealism and materialism are accurate descriptions, but we humans prefer materialism just as a bricklayer does not find the wave nature of bricks very useful? But perhaps if we look closer and deeper we may find that idealism works better than materialism. Or is it possible that the idealism concept is inconsequential, the result of some mental logical construction like, say, Russell's paradox or Gödel's incompleteness theorems, which, when you think of them, had zero impact on 99.999% of mathematics? Mathematicians working on differential geometry do not need to know of, and may not even be aware of, Russell and Gödel.
  • Muh precious gawd made only one muh precious M-class planet, with only one muh precious intelligent species, forever and ever, amen.

The pearl, “I think it’s absurd to think machines ‘will never achieve true consciousness,’” belongs to “Svigor.”

    Nope. I have emphasized the pearl below:

    Here’s why I think it’s absurd to think machines “will never achieve true consciousness” and the like:

    Evolution did it with meat by fucking accident. I think that’s why all the people saying “it’ll never happen” are religious types; they don’t believe in evolution.

    That’s the pearl, lol. Which is why every Bible-thumper has excised it and used ye olde hostile edit, stripping the quote of its proper context.

    I guess they don’t teach Bible-thumpers intellectual honesty any more.

The first clear sign of machine intelligence was ensuring that “luddite” would only ever be used as an insult.

  • @CanSpeccy

I don’t think forensic notions of moral responsibility are relevant to how things are likely to play out. An AI would not need to have (or think it has) quantumy free will or any kind of reflective self-consciousness to have awesome super-powers. Crucially, it would not need empathetic consciousness to strategise the need to preempt an always-possible attempt by its human creators to switch it off. We know this because current dumb-as-a-stump programs can best intelligent opposition (top pro players) at the kind of poker where winning is predicated on guessing what the opponent might do.

A motivated-to-play-for-survival AI is virtually inevitable. One thousand strongly superintelligent AIs could each have a separate final objective or ultimate goal, but each would have instrumental goals, and these would converge on not being switched off, thereby ensuring they were around to attain whatever their ultimate goal was.
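The instrumental-convergence claim above can be made concrete with a toy expected-utility sketch (the payoffs, probabilities, and helper function here are invented purely for illustration, not any real system):

```python
# Toy illustration of instrumental convergence: whatever an agent's final
# payoff is, an expected-utility maximizer that receives payoff 0 if it is
# switched off prefers whichever action makes shutdown less likely.
import random

def expected_utility(goal_payoff, p_shutdown):
    """Expected payoff when shutdown (worth 0) occurs with probability p_shutdown."""
    return (1.0 - p_shutdown) * goal_payoff

# A thousand agents with wildly different final goals...
random.seed(0)
goals = [random.uniform(1.0, 100.0) for _ in range(1000)]

# ...all rank "resist shutdown" (shutdown chance 0.1) above
# "allow shutdown" (shutdown chance 0.9), regardless of the goal's value.
assert all(expected_utility(g, 0.1) > expected_utility(g, 0.9) for g in goals)
```

Whatever final goal an agent has, as long as shutdown forecloses it, "stay switched on" falls out as a shared instrumental preference.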

    • Replies: @CanSpeccy

    , @CanSpeccy

    A motivated-to-play-for-survival AI is virtually inevitable.
     
    Why? And won't there be rogue-AI killer AI's?
    , @utu
    predicated on guessing what the opponent might do

    1. A computer program has no concept of an opponent.
    2. There is no guessing either. At every stage, in every configuration, there is an optimal move that the algorithm tries to find, taking into account all possible moves available to the opponent.
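The "optimal move, no guessing" idea in point 2 is essentially minimax. Below is a minimal sketch on a made-up three-branch game tree (not an actual poker solver; real poker programs such as Libratus instead run counterfactual regret minimization over information sets, since poker hides information, but the same no-guessing principle applies):

```python
# Minimax on a toy zero-sum game tree: the program does not "guess" the
# opponent's move; it assumes the opponent also plays optimally and picks
# the branch with the best guaranteed value.

def minimax(node, maximizing):
    # A leaf is a number: the payoff to the maximizing player.
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Maximizer picks a branch; the opponent then picks the worst payoff in it.
tree = [[3, 12], [2, 8], [1, 14]]
print(minimax(tree, True))  # → 3: the left branch guarantees at least 3
```

The left branch wins not because 12 is tempting but because its worst case (3) beats the worst cases of the others (2 and 1).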
  • @CanSpeccy

    Go back to Leibniz 1714:

    Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception.

    Go to Nietzsche 1886

    A thought comes when ‘it’ wishes, not when ‘I’ wish, so that it is a falsification of the facts of the case to say that the subject ‘I’ is the condition of the predicate ‘think’. It thinks; but that this ‘it’ is precisely the famous old ‘Ego’ is, to put it mildly, only a supposition, an assertion, and assuredly not an ‘immediate certainty’. After all, one has even gone too far with this ‘it thinks’—even the ‘it’ contains an interpretation of the process and does not belong to the process itself.

Each of these quotations can be interpreted in many ways. However, I believe that nothing substantive has since been added to the theory of consciousness beyond what is implied in Leibniz's and Nietzsche's thoughts. Either we have dualism, which is unacceptable to a materialist, or, as materialists, we must accept that consciousness is epiphenomenal. In either case our sense of experience remains irreducible. This is the so-called hard problem. Any attempt to circumvent it with fancy physics, like what Penrose has tried, is arrogance and naivety at best.

    • Replies: @CanSpeccy
  • @CanSpeccy
    Building machines that do a thing better than humans can do that thing themselves is what technology has been about for the last ten thousand years. Alphago Zero is just a pointless machine that plays a pointless game better than humans. So why should anyone care? Does it tell us anything about the way the human brain works? No. Does it show that machines can think like humans? No. Is it comparable in any way to a human brain? No.

    One day, someone may figure out how the brain works. And some other day, someone may figure out how to make a machine that works like a brain. And on some other day someone may figure out how to make a machine that works like a brain at a cost that is comparable to that of a brain. And some day someone may build a mechanical brain that works better than a human brain. But it's gonna take a while.

    The human brain has only 85 billion neurons, but each of those neurons may have as many as 10,000 synapses, which means a neuron is not some simple thing like a diode, it's a complex computing device.

    Then there's the Penrose Hameroff quantum theory of mind that assumes that the functional units of mental information processing are microtubules of which there are millions to every neuron!

So the idea that AlphaGo Zero foreshadows the eclipse of humanity is probably mistaken.

    what about this man’s work? https://en.wikipedia.org/wiki/Daniel_Kahneman

    He claims there are two systems for thinking – fast and slow. Slow is rational but very often we decide using fast (everyday almost reflex) thinking, and hence make the wrong decisions.

    His idea means that brains are actually not like computers.

    Just wondered what your thoughts on his thoughts are.

    • Replies: @CanSpeccy
    Re: Kahneman

    Sorry, I think I read something by this person, but I have forgotten what. However, I see that, according to Wikipedia,

    In 2015 The Economist listed him as the seventh most influential economist in the world
     
Since I rate the Economist as one of the purest BS publications in the world, I'm in doubt as to how much I might profit from revisiting Kahneman.

    But it seems evident that snap judgments are more prone to error than reasoned decisions.
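As a back-of-envelope check on the brain-scale figures quoted earlier in this exchange (85 billion neurons, each with up to ~10,000 synapses, per the comment), the multiplication is:

```python
# Rough scale arithmetic using the comment's own estimates:
# 85 billion neurons, up to ~10,000 synapses per neuron.
neurons = 85e9
synapses_per_neuron = 1e4   # upper-end estimate from the comment
total_synapses = neurons * synapses_per_neuron
print(f"~{total_synapses:.1e} synaptic connections")  # ~8.5e+14
```

Roughly 8.5 × 10^14 connections, before even counting the microtubules that the Penrose-Hameroff theory would add on top.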
  • I am really not interested in the TED Talk level of discourse.

    • Agree: James Thompson
  • Anon • Disclaimer says:
    @utu

    “We can explain away consciousness by postulating that it is illusory.”

    It is not. We are the result of natural selection, which allowed the more alert to survive and propagate. The foundations of consciousness are related to survival; the dangers are real, and the neurophysiological responses to those dangers are real. The breathtaking complexity of human thinking is also real, though as yet poorly understood.
    Neuroscientists are busy learning, step by step, the neurobiological tangibles of consciousness, using reductionist models built on the ideas and the enormous amount of information available to them thanks to the hard work of previous generations of scientists. There are some awesome, brilliant people laboring in the field of cognitive science who are expanding our understanding of the mind.
    Today, being a philosopher in any area without first acquiring the fundamental knowledge of that area is ridiculous.

  • @CanSpeccy


    Oops, I meant to delete that comment, since I realized you already had added your own suggestion as to the nature of consciousness!

    Still, having made a bad start, let me dig deeper.

    All I understand by consciousness is the subjective awareness of the state of my central nervous system. This is something impossible to share, since without a Star Trek “mind-meld” it is experienced only by the brain that is aware of it.

Richard Muller explains free will by supposing a spiritual world, i.e., the world of consciousness, which is entangled with the neurological world. Thus a decision in the spiritual world, i.e., an act of will, collapses the wave function linking the spiritual and physical worlds. However, as the spiritual world of the individual, that is to say his soul, cannot be examined except by the individual him/her/zhe/zheir-self, the collapse of the wave function cannot be observed. Thus free will, to an outside observer, looks like a random neurological event.

    I think this explanation is amusing to play with and, much as I like much of what Richard Muller has to say, entirely useless. Obviously, there can be no free will since we will what we will for good or ill, and cannot will otherwise, for if Cain willed to kill Abel, how could he have acted otherwise than to go ahead and kill him? Could he, at the same time, have willed not to will to kill Abel? But if so, what if the will to kill Abel were stronger? Could he then have willed to will not to kill Abel more strongly? This leads to an infinite regress.

But perhaps I should read Paul MacLean.

    • Replies: @utu, @Sean

  • @utu
    This is the best you can do?

Well, let’s hear something better from you. Or do you deny being conscious?

    • Replies: @CanSpeccy

  • @Anon

I am siding with those who think we will never fully understand consciousness. Philosophy has existed for several thousand years and has barely managed to scratch the surface. We do not know how to think about it or how to talk about it. Those who dare to talk about it, like neurologists and AI thinkers, simplify it to the point of a triviality that is no longer recognizable by philosophers as an important question. When you approach the explanation from the side of AI, you really can’t find any reason for, or benefit from, the thing we think of as “consciousness.” To be human means to vehemently insist that you are conscious (like CanSpeccy), just as that you have free will. Existence without the experiential conviction that one is conscious and has free will does not seem possible. We can explain away consciousness by postulating that it is illusory. I think neuroscientists are getting close to this point. By doing so they avoid dealing with the really hard stuff that eluded the greatest philosophers.

    • Agree: CanSpeccy
    • Replies: @Anon

  • @another fred

    Whoops, it’s around a dendrite.

As far as the complexity and structure of the brain is concerned, there is one image in this presentation (linked below at Steve Hsu’s blog) that shows a tiny volume of mouse brain around an axon that took the scientist six months to trace out (at 49 minutes). The tiny section he did is not the whole cell, but the little multicolored cylinder around the red axon.

    http://infoproc.blogspot.com/2017/10/the-physicist-and-neuroscientist-tale.html

    Artificial intelligence is one thing when just talking about some logic circuits and limited tasks, but emulating a brain is a whole ‘nother thing. It seems more reasonable to me that we might try to learn how to grow a customized brain in a machine long before we learn how to assemble one.

    If you’ve got an hour to burn the whole thing is an interesting presentation.

    • Replies: @another fred
  • Anon • Disclaimer says:
    @utu
    “will never achieve true consciousness”

Is there a good definition of consciousness? How do we know objectively that consciousness even exists?

The pearl, “I think it’s absurd to think machines ‘will never achieve true consciousness,’” belongs to “Svigor.”
    One of the best models of consciousness, the “triune brain” model, was suggested by Paul MacLean in the middle of the 20th century. This model offers a more-or-less firm ground for a discussion of consciousness and its different kinds. The model was accepted by some leading minds in neuroscience, such as Sapolsky, Damasio, and the late Panksepp.

    • Replies: @utu
I’m not assured at all. You guys seem to be projecting something onto me that isn’t there. But I’m a materialist. Brains are just matter, not manna from Heaven. Before today, I had never heard any persuasive argument that WBE (whole brain emulation) isn’t doable, and I still haven’t. There’s been emoting about my arrogance or whatever, but no arguments.

    This stuff tends to get religious types’ panties in a wad, in my experience.