

Eliezer Yudkowsky

American AI researcher and essayist (born 1979)

Eliezer S. Yudkowsky (EL-ee-EZ-ər yud-KOW-skee;[1] born September 11, 1979) is an American artificial intelligence researcher[2][3][4][5] and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence.[6][7] He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California.[8] His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.[9]

Work in artificial intelligence safety

See also: Machine Intelligence Research Institute

Goal learning and incentives in software systems

Yudkowsky's views on the safety challenges posed by future generations of AI systems are discussed in Stuart Russell's and Peter Norvig's undergraduate textbook Artificial Intelligence: A Modern Approach.

Noting the difficulty of formally specifying helpful goals by hand, Russell and Norvig cite Yudkowsky's proposal that autonomous and adaptive systems be designed to learn correct behavior over time:

Yudkowsky (2008)[10] goes into more detail about how to design a Friendly AI.

He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design—to design a mechanism for evolving AI under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.[6]

In response to the instrumental convergence concern, that autonomous decision-making systems with poorly designed goals would have default incentives to mistreat humans, Yudkowsky and other MIRI researchers have recommended that work be done to specify software agents that converge on safe default behaviors even when their goals are misspecified.[11][7]

Capabilities forecasting

In the intelligence explosion scenario hypothesized by I. J. Good, recursively self-improving AI systems quickly transition from subhuman general intelligence to superintelligent. Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies sketches out Good's argument in detail, while citing Yudkowsky on the risk that anthropomorphizing advanced AI systems will cause people to misunderstand the nature of an intelligence explosion.

"AI might make an apparently sharp jump in intelligence completely as the result of theanthropism, the human tendency to fantasize of 'village idiot' and 'Einstein' as the extreme ends see the intelligence scale, instead homework nearly indistinguishable points on honesty scale of minds-in-general."[6][10][12]

In Artificial Intelligence: A Modern Approach, Russell and Norvig raise the objection that there are known limits to intelligent problem-solving from computational complexity theory; if there are strong limits on how efficiently algorithms can solve various tasks, an intelligence explosion may not be possible.[6]

Time op-ed

In a 2023 op-ed for Time magazine, Yudkowsky discussed the risk of artificial intelligence and proposed actions that could be taken to limit it, including a total halt on the development of AI,[13][14] or even "destroy[ing] a rogue datacenter by airstrike".[5] The article helped introduce the debate about AI alignment to the mainstream, prompting a reporter to ask President Joe Biden a question about AI safety at a press briefing.[2]

Rationality writing

Between 2006 and 2009, Yudkowsky and Robin Hanson were the principal contributors to Overcoming Bias, a cognitive and social science blog sponsored by the Future of Humanity Institute of Oxford University.

In February 2009, Yudkowsky founded LessWrong, a "community blog devoted to refining the art of human rationality".[15] Overcoming Bias has since functioned as Hanson's personal blog.

Over 300 blog posts by Yudkowsky on philosophy and science (originally written on LessWrong and Overcoming Bias) were released as an ebook, Rationality: From AI to Zombies, by MIRI in 2015.[17] MIRI has also published Inadequate Equilibria, Yudkowsky's 2017 ebook on societal inefficiencies.[18]

Yudkowsky has also written several works of fiction.

His fanfiction story Harry Potter and the Methods of Rationality uses plot elements from J. K. Rowling's Harry Potter series to illustrate topics in science and rationality.[15][19] The New Yorker described Harry Potter and the Methods of Rationality as a retelling of Rowling's original "in an attempt to explain Harry's wizardry through the scientific method".[20]

Personal life

Yudkowsky is an autodidact[21] and did not attend high school or college.[22] He was raised as a Modern Orthodox Jew, but does not identify religiously as a Jew.[23][24]

Academic publications

  • Yudkowsky, Eliezer (2007). "Levels of Organization in General Intelligence" (PDF). Artificial General Intelligence. Berlin: Springer. doi:10.1007/978-3-540-68677-4_12.

  • Yudkowsky, Eliezer (2008). "Cognitive Biases Potentially Affecting Opinion of Global Risks"(PDF). In Bostrom, Nick; Ćirković, Milan (eds.). Global Catastrophic Risks.

    Oxford University Keep. ISBN .

  • Yudkowsky, Eliezer (2008). "Artificial Astuteness as a Positive and Disputing Factor in Global Risk"(PDF). Superimpose Bostrom, Nick; Ćirković, Milan (eds.). Global Catastrophic Risks. Oxford Foundation Press. ISBN .
  • Yudkowsky, Eliezer (2011). "Complex Value Systems in Friendly AI" (PDF). Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Berlin: Springer.

  • Yudkowsky, Eliezer (2012). "Friendly Artificial Intelligence". In Eden, Ammon; Moor, James; Søraker, John; et al. (eds.). Singularity Hypotheses: A Scientific and Philosophical Assessment. The Frontiers Collection. Berlin: Springer. pp. 181–195. doi:10.1007/978-3-642-32560-1_10. ISBN .

  • Bostrom, Nick; Yudkowsky, Eliezer (2014). "The Ethics of Artificial Intelligence" (PDF). In Frankish, Keith; Ramsey, William (eds.). The Cambridge Handbook of Artificial Intelligence. New York: Cambridge University Press. ISBN .

  • LaVictoire, Patrick; Fallenstein, Benja; Yudkowsky, Eliezer; Bárász, Mihály; Christiano, Paul; Herreshoff, Marcello (2014). "Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem". Multiagent Interaction without Prior Coordination: Papers from the AAAI-14 Workshop. AAAI Publications. Archived from the original on April 15, 2021. Retrieved October 16, 2015.

  • Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer (2015). "Corrigibility" (PDF). AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications.

See also

Notes

References

  1. ^"Eliezer Yudkowsky on “Three Major Singularity Schools”" on YouTube. February 16, 2012. Timestamp 1:18.
  2. ^ ab Silver, Nate (April 10, 2023). "How Concerned Are Americans About The Pitfalls Of AI?". FiveThirtyEight. Archived from the original on April 17, 2023. Retrieved April 17, 2023.

  3. ^ Ocampo, Rodolfo (April 4, 2023). "I used to work at Google and now I'm an AI researcher. Here's why slowing down AI development is wise". The Conversation. Archived from the original on April 11, 2023. Retrieved June 19, 2023.

  4. ^ Gault, Matthew (March 31, 2023). "AI Theorist Says Nuclear War Preferable to Developing Advanced AI". Vice. Archived from the original on May 15, 2023. Retrieved June 19, 2023.
  5. ^ ab Hutson, Matthew (May 16, 2023). "Can We Stop Runaway A.I.?". The New Yorker. ISSN 0028-792X. Archived from the original on May 19, 2023. Retrieved May 19, 2023.

  6. ^ abcd Russell, Stuart; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN .

  7. ^ ab Leighton, Jonathan (2011). The Battle for Compassion: Ethics in an Apathetic Universe. Algora. ISBN .
  8. ^ Kurzweil, Ray (2005). The Singularity Is Near. New York City: Viking Penguin. ISBN .
  9. ^ Ford, Paul (February 11, 2015). "Our Fear of Artificial Intelligence". MIT Technology Review. Archived from the original on March 30, 2019. Retrieved April 9, 2019.

  10. ^ ab Yudkowsky, Eliezer (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF). In Bostrom, Nick; Ćirković, Milan (eds.). Global Catastrophic Risks. Oxford University Press. ISBN . Archived (PDF) from the original on March 2, 2013. Retrieved October 16, 2015.

  11. ^ Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer (2015). "Corrigibility". AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications. Archived from the original on January 15, 2016. Retrieved October 16, 2015.

  12. ^ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. ISBN .
  13. ^ Moss, Sebastian (March 30, 2023). ""Be willing to destroy a rogue data center by airstrike" – leading AI alignment researcher pens Time piece calling for ban on large GPU clusters". Data Center Dynamics. Archived from the original on April 17, 2023. Retrieved April 17, 2023.

  14. ^ Ferguson, Niall (April 9, 2023). "The Aliens Have Landed, and We Created Them". Bloomberg. Archived from the original on April 9, 2023. Retrieved April 17, 2023.

  15. ^ ab Miller, James (2012). Singularity Rising. BenBella Books, Inc. ISBN .
  16. ^ Miller, James D. "Rifts in Rationality – New Rambler Review". newramblerreview.com. Archived from the original on July 28, 2018. Retrieved July 28, 2018.

  17. ^ Machine Intelligence Research Institute. "Inadequate Equilibria: Where and How Civilizations Get Stuck". Archived from the original on September 21, 2020. Retrieved May 13, 2020.
  18. ^ Snyder, Daniel D. (July 18, 2011). "'Harry Potter' and the Key to Immortality". The Atlantic. Archived from the original on December 23, 2015. Retrieved June 13, 2022.

  19. ^ Packer, George (2011). "No Death, No Taxes: The Libertarian Futurism of a Silicon Valley Billionaire". The New Yorker. p. 54. Archived from the original on December 14, 2016. Retrieved October 12, 2015.
  20. ^ Matthews, Dylan; Pinkerton, Byrd (June 19, 2019). "He co-founded Skype. Now he's spending his fortune on stopping dangerous AI". Vox. Archived from the original on March 6, 2020. Retrieved March 22, 2020.

  21. ^ Saperstein, Gregory (August 9, 2012). "5 Minutes With a Visionary: Eliezer Yudkowsky". CNBC. Archived from the original on August 1, 2017. Retrieved September 9, 2017.

  22. ^ Elia-Shalev, Asaf (December 1, 2022). "Synagogues are joining an 'effective altruism' initiative. Will the Sam Bankman-Fried scandal stop them?". Jewish Telegraphic Agency. Retrieved December 4, 2023.
  23. ^ Yudkowsky, Eliezer (October 4, 2007). "Avoiding your belief's real weak points". LessWrong. Archived from the original on May 2, 2021. Retrieved April 30, 2021.

External links
