The Singularity Summit at Stanford, May 13th
The Singularity Summit at Stanford is coming up next month:
What, then, is the singularity? It's a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself. Understanding the singularity will alter our perspective on the significance of our past and the ramifications for our future. To truly understand it inherently changes one's view of life in general and one's own particular life.
The Singularity Institute is the principal driving force behind the event, and as such you can expect a strong focus on general artificial intelligence (GAI) and Vingean or Kurzweilian views of the technological singularity. The interest here is, at root, in methodologies for overcoming limits to exponential growth that result from the limits of the human mind. This is all of little direct relevance to the near future of healthy life extension and advancing medical technology - as it will take place while the development of GAI is still in its earliest stages - but it is of great relevance to the mid- and long-term future of all human endeavors. Tools that improve our ability to manage complexity will greatly speed the advance of biotechnology, a science that is already bumping up against the limits imposed by our ability to manage and understand vast datasets and complex biological systems.
The Singularity Institute is worth a closer look by those interested in successful advocacy for a cause. Their transformation from a force-of-will personal venture into a professional advocacy and position group has been quite impressive since Tyler Emerson took the reins, culminating in acquiring Peter Thiel as an advisor and patron. I can't overemphasize the cachet that brings in monied circles; Thiel is widely regarded as having an excellent sense of what to invest in and where the technology world will be going next. Don't let that obscure the small, important details for you, however - take some time to watch how the Institute does things if you're interested in getting ahead in advocacy.
UPDATE: Tyler Emerson emails to note:
You wrote, "The Singularity Institute is the principal driving force behind the event,..." While that's partially accurate, this isn't an SIAI event -- it's a Stanford event organized by the Symbolic Systems Program, and co-sponsored by the Singularity Institute, KurzweilAI.net, and the Stanford Transhumanist Association.You might note that Thiel became involved in part because he was very impressed by our vision and objectives, and especially by Yudkowsky's work. As it stands, the post could be interpreted as if I were solely responsible for Thiel's involvement, which isn't accurate.
The Singularity Institute seems set to successfully emulate the Foresight model in bringing interest and investment into their field, and the best of luck to them. Now if we could just garner a few more organizations like that for the healthy life extension cause to complement and compete with the Methuselah Foundation...
Technorati tags: advocacy, AI, singularity
"Although neither utopian or dystopian"
The singularity will be, almost certainly, either utopian or dystopian. And the members of the Singularity Institute seem far too confident that the singularity will be the former, and not the latter.
???
Kip, have you talked to Eliezer, Michael Wilson, any of the Singularitarians on ImmInst, etc., in the last few years? I know you have, so why are you saying that we're too confident the Singularity will be beneficial? Everyone is horribly afraid that the Singularity will be a disaster, and we spend a lot of our time arguing that the chances are higher that it will go wrong than right.
You're obviously much more familiar with the Institute than I am. And so I'm inclined to defer to your better judgment. But my passing experience with them, and some of the things I've read, suggest that many or most members are both enthusiastic and optimistic about the Singularity. Furthermore, for people who dread the Singularity, you spend a tremendous amount of time and energy dedicated to accelerating it. I understand that the sooner a friendly-AI-conscious group like the Institute brings about the Singularity, the less likely a non-friendly-AI-conscious group is to do so. But, if the Singularity is more likely than not to be a disaster, why hasten its arrival? At best, the question of whether hastening or postponing it is better seems indeterminate. Yet the Singularity Institute seems convinced, or obsessed, with making this dreadful event happen. Is there not the slightest tension between your enthusiasm and your pessimism?
In general terms, we want to pursue work that increases the probability of Friendly AI and a predictably acceptable intelligence explosition. One way to learn whether Friendly AI is viable is to develop what's required -- knowledge, skills, contacts, awareness, legitimacy, and so on -- to hire and fund extremely gifted researchers to study these interdisciplinary, difficult problems. We have ideas on how this should be done, and we're pursuing them. Feel free to express your ideas.
I see a sufficient theoretical foundation for the design pattern of recursive self improvement. Since I don't want to see the transition to that design pattern negatively affect human lives, I stand guilty of the desire to decrease the likelihood of that outcome.
The notion of "optimism about the Singularity" is foreign as all hell to me, and I'm regularly accused of general 'optimism.'
"Yet the Singularity Institute seems convinced, or obsessed, with making this dreadful event happen."
This is an absurd statement. You have completely misunderstood our intent, which is to understand whether Friendly AI is viable as the means of ensuring an acceptable intelligence explosion, and, if the research tells us that Friendly AI is predictably viable, to create Friendly AI. Please, criticize how we pursue that intent, and, better yet, suggest how *you* would pursue it; but don't make blatantly inaccurate statements about the nature of that intent.
Tyler,
I think you're being most uncharitable to me about the impression I have of the Institute. You suggest that the Institute's intention is merely conditional: *if* a friendly Singularity is "predictably viable" (and how much confidence would that require?), then, and only then, would it work towards bringing about the Singularity. And perhaps this is the Institute's goal. But is that really the impression the Institute gives? Just the most cursory search through the Institute's website produces statements such as the following:
"Why is the Singularity worth doing?"
"The Singularity is something that we can actually go out and do, not a philosophical way of describing something that inevitably happens to humanity."
"In that sense, sparking the Singularity is no different from any other grand challenge - someone has to do it."
"Ideally, Earth would have a Singularity movement around the same size as, say, the environmentalist movement, the earlier civil-rights movement, and so on. Why not? The stakes are that large and larger. But what actually exists, at this moment in time, is a tiny handful of people who realize what's going on and are trying to do something about it. It is not quite true that if you don't do it, no one will, but the pool of other people who will do it if you don't is smaller than you might think. If you're fortunate enough to be one of the few people who currently know what the Singularity is and would like to see it happen - even if you learned about the Singularity just now - we need your help because there aren't many people like you."
"The Singularity Institute exists to carry out the mission of the Singularity-aware - to accelerate the arrival of the Singularity in order to hasten its human benefits; to close the window of vulnerability that exists while humanity cannot increase its intelligence along with its technology; and to protect the integrity of the Singularity by ensuring that those projects which finally implement the Singularity are carried out in full awareness of the implications and without distraction from the responsibilities involved."
These are all quotes from:
http://www.singinst.org/why-singularity.html
As you see, none of these statements express any conditional logic or reticence about whether we should accelerate the singularity. Instead, they suggest exactly what I wrote earlier: "Yet the Singularity Institute seems convinced, or obsessed, with making this dreadful event happen." Perhaps this article, from the Institute's website, no longer represents its position. Perhaps I've somehow grossly misinterpreted statements such as "The Singularity Institute exists to carry out the mission of the Singularity-aware - to accelerate the arrival of the Singularity in order to hasten its human benefits." There is little doubt in my mind that, if I only invested more time in searching the SL4 archives, I could find far more telling, and unconditionally enthusiastic, statements from Institute members. But I just want to suggest that, if I have the wrong impression about the Institute's intentions, perhaps that is not entirely my fault.
Kip:
I believe you are getting hung up on the singularity concept or are again misconstruing our intent. Since this is still happening, I accept that the quoted content has given you a wrong impression of our objectives. Our purpose may be clearer if you replace "singularity" with "Friendly AI." Nevertheless, I hope you now have the impression that at this time we are not confident the intelligence explosition will be favorable; I am also not confident it will be unfavorable, but I have more confidence in the latter than in the former. This is an honest answer. I dislike this answer. I dislike the lack of funding and the lack of numerous, exceptional, sufficiently informed minds studying Friendly AI and existential risks. I would say I have nothing but contempt for this situation, but I don't understand the situation well enough to have that kind of conviction. Regardless, I am trying to change this situation since, all else equal, I think the chance of a favorable intelligence explosition will be increased. I trust you accept that recursive self improvement is an engineering challenge that someone some day will probably overcome, and that we will either face or succumb to the perceived dangers.
Regarding our site, some content is four or five years out of date. After the summit and our new hires, we hope to update our information; but only if that is the best use of our time. The best use of our time -- and your time, I believe -- should be decided by considering carefully how to "best" minimize net existential risk, which of course is a complex, critical, constant question that should be asked, studied, answered tentatively, and acted upon.
Regards,
Tyler
Reason: Is it not possible to edit posts once posted? I seem to be fond of typing "explosition" rather than "explosion." :)
Sorry, but no. That's what the preview button is for.
Tyler,
Yes, I understand that you're not confident that the intelligence explosion will be favorable. But I would note that, even if one substitutes "Friendly AI" for "the Singularity", there is still no hint of reticence or conditional logic in those quotes. More importantly, there remains a curious tension between your enthusiasm and your pessimism. If the Singularity is going to be so dreadful, why not spend your efforts trying to delay it, even for a day? And if one more day pre-Singularity is not worth investing so much more time and money, why bother with activism at all?
To begin to answer these questions, it would be helpful to note a point of difference between us (and between Reason and me): I'm much more a fatalist, and much more suspicious of activism, than you are. You wrote:
"I trust you accept that recursive self improvement is an engineering challenge that someone some day will probably overcome, and that we will either face or succumb to the perceived dangers."
Yes, I do agree with that. But from that premise, the conclusion does not follow that I should do anything about it. Consider this analogy: I accept that holding the 2008 Presidential Election is a challenge that someone some day will probably overcome, and that we will either face [it] or succumb to the perceived dangers. But I don't vote and, as best as I can tell, neither should any rational person who values their time. Even the narrowest Presidential election wins were overdetermined by far more than one vote, such that any one vote could have been excluded without changing the outcome in the slightest.
I don't try to accelerate, or delay, the Singularity (which to do?) for the same reason: my contribution would be so small as to be worthless. If I were Bill Gates, the story would be different. But I am not Bill Gates. The Institute's strong emphasis on activism also helps explain Yudkowsky's critique of Kurzweil.
In short, I understand, somewhat better, the Institute's intentions now. But I still think there is a curious tension between the Institute's pessimism and enthusiasm/activism.
Thanks for the discussion, Kip. I look forward to interacting more.
Best,
Tyler