Posted by: Gray | March 29, 2023

Endorsing call for Pause Giant AI Experiments

This call is a positive, timely step initiated by folks at the Future of Life Institute. I endorse it and invite you to as well. It has already been endorsed by well over a thousand people, including key researchers and entrepreneurs such as Stuart Russell, Max Tegmark, Anthony Aguirre, Jaan Tallinn, Steve Wozniak, and Valerie Pisano. The call is copied below and can be signed online here: https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.


AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles: “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5]  We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.

The website for signing on is online here: https://futureoflife.org/open-letter/pause-giant-ai-experiments/


Posted by: Gray | March 29, 2023

Can Congress Encourage Wiser AI?

Here is a column that appeared in the Portland Press Herald on 3/28/23:

Posted by: Gray | March 29, 2023

Who Will Call the AI Piper’s Tune?

This is an opinion column recently published in the Mount Desert Islander newspaper:

Posted by: Gray | March 9, 2023

Songs for a Wiser Earth

These are songs to accompany the reading of Smarter Planet vs. Wiser Earth? How Dialogue Can Transform Artificial Intelligence into Collaborative Wisdom, Quaker Institute for the Future, 2023.

“I’m Gonna Slow Right Down”

Resources for studying, researching, and applying ways to approach Artificial Intelligence (AI) as Collaborative Wisdom (CW)

Gray Cox, gray@coa.edu, #1-207-460-1163

College of the Atlantic, 105 Eden Street, Bar Harbor, Maine 04609 USA

These are resources provided to supplement a keynote talk I gave at the NIC.br Survey Methodology Workshop (August 29, 2022) in São Paulo, Brazil, on “AI and the Political Philosophy of the Future.”

For an overview:

A recording of the talk is available here: https://www.youtube.com/watch?v=tdISk_1iDeo

The text of my talk is available here.

Here is a pdf of the slides.

A manuscript of a forthcoming book that develops those and related ideas at length is available here as Smarter Planet vs. Wiser Earth? Artificial Intelligence and Collaborative Wisdom. (Please note that responses, comments and suggestions for this would be especially welcome.)

Methods of Dialogical Reasoning

For a concise, practical manual on basic methods of negotiation in the shared problem-solving paradigm as practiced in North America and United Nations contexts, a classic handbook is:

Fisher, Roger, William L. Ury, and Bruce Patton. Getting to Yes: Negotiating Agreement Without Giving In. 3rd rev. ed. New York: Penguin Publishing Group, 2011.

For a very interesting and useful account of methods for researching and applying conflict resolution approaches from diverse traditions in cross-cultural contexts, a good place to start is:

Lederach, John. Preparing for Peace: Conflict Transformation Across Cultures. Syracuse, N.Y.: Syracuse University Press, 1996.

For overviews of the field and the paradigms employed in it along with case studies of key examples, see:

Cox, Gray. The Ways of Peace: A Philosophy of Peace As Action. New York: Paulist Press, 1986. (Available as a pdf at: https://breathonthewater.files.wordpress.com/2015/12/00fullversionwaysofpeaceword.pdf)

Ramsbotham, Oliver, Tom Woodhouse, and Hugh Miall. Contemporary Conflict Resolution. 4th ed. Cambridge; Malden, MA: Polity, 2016.

For examples from diverse cultural traditions see:

Bondurant, Joan Valerie. Conquest of Violence: The Gandhian Philosophy of Conflict. Rev. ed. Princeton, N.J.: Princeton University Press, 1988. (NOTE: This is an especially helpful, systematic introduction to Gandhi’s thought and practice and to the theoretical and practical features of the traditions of conflict transformation which he played a key role in developing.)

Chenoweth, Erica, and Maria Stephan. Why Civil Resistance Works: The Strategic Logic of Nonviolent Conflict. Reprint ed. New York: Columbia University Press, 2012. (NOTE: Includes systematic statistical analysis of the effectiveness of nonviolent methods of social change.)

Chew, Pat K., ed. The Conflict and Culture Reader. New York: NYU Press, 2001.

Cox, Gray, Charles Blanchard, Geoff Garver, Keith Helmuth, Leonard Joy, Judy Lumb, and Sara Wolcott. A Quaker Approach to Research: Collaborative Practice and Communal Discernment. Producciones de la Hamaca, 2014. (Available as a pdf at: https://quakerinstitute.org/wp-content/uploads/2021/06/QAR-QIF-web.pdf)

Gilligan, Carol. In a Different Voice: Psychological Theory and Women’s Development. Reprint ed. Cambridge, Mass.: Harvard University Press, 2016.

Nan, Susan Allen, Zachariah Cherian Mampilly, and Andrea Bartoli, eds. Peacemaking. Santa Barbara, Calif.: Praeger, 2011.

Ostrom, Elinor. Governing the Commons. Reissue ed. Cambridge: Cambridge University Press, 2015.

Richards, Howard. The Evaluation of Cultural Action: An Evaluative Study of the Parents and Children Program. Palgrave Macmillan, 2017. (NOTE: This is an especially engaging case study coupled with especially clear and nuanced philosophical analysis of ways that community-based, critical participatory research in the tradition of Paulo Freire can be used to engage in collaborative, dialogical reasoning and social transformation.)

Ruddick, Sara. Maternal Thinking: Toward a Politics of Peace. Boston: Beacon Press, 1995.

Sheeran, Michael J. Beyond Majority Rule: Voteless Decisions in the Religious Society of Friends. New ed. Philadelphia, Pa.: Philadelphia Yearly Meeting of the Religious Society of Friends, 1983.

Simard, Suzanne. Finding the Mother Tree: Discovering the Wisdom of the Forest. New York: Vintage, 2022. (NOTE: This book provides an interesting entry into current forestry science’s understanding of forms of communication and intelligence (in the sense used in this talk) in the forests of western Canada, and begins to suggest ways that natural intelligence might be usefully incorporated into Human/AI/Nature systems. Another relevant source is Robin Wall Kimmerer’s Braiding Sweetgrass: Indigenous Wisdom, Scientific Knowledge and the Teachings of Plants, Milkweed Editions, 2015.)

Straus, David, and Thomas C. Layton. How to Make Collaboration Work: Powerful Ways to Build Consensus, Solve Problems, and Make Decisions. San Francisco: Berrett-Koehler Publishers, 2002.

Ethics and Computers

For an introduction to the contrasts between monological inference and dialogical reasoning and their implications for programming practices and moral issues in AI, see:

Cox, John Gray. “Reframing Ethical Theory, Pedagogy, and Legislation to Bias Open Source AGI Towards Friendliness and Wisdom.” Journal of Evolution and Technology 25, no. 2 (November 2015): 39–54. (Available at: https://jetpress.org/v25.2/cox.htm)

Cox, Gray. “Decisions Dialogue: The Bear.” A very simple example of ways to begin incorporating key features of dialogical reasoning into programs, written in block code for working on these issues with children. It consists of a basic ethics dilemma game that first practices monological reasoning for moral choice and then tries to incorporate dialogical reasoning elements through multiple iterations of revising the code. It is available here: https://scratch.mit.edu/projects/428374274

Turing, A. M. “Computing Machinery and Intelligence.” Mind LIX, no. 236 (October 1, 1950): 433–60. https://doi.org/10.1093/mind/LIX.236.433. Available at: https://www.csee.umbc.edu/courses/471/papers/turing.pdf (NOTE: Turing frames the contrast in terms of the kinds of reasoning and computer program development appropriate to the “Machine” vs. the “Child” model. The nuances and implications of the Child model are developed in the last, generally overlooked section of the paper.)
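To make the contrast between the two paradigms concrete, here is a minimal, hypothetical sketch in Python (not the actual Scratch code of “Decisions Dialogue: The Bear”; the option names and scoring rules are invented for illustration). A monological rule ranks options by a single agent’s utility; a simple dialogical rule keeps only options every party can accept and then maximizes the shared (minimum) acceptability.

```python
# Hypothetical sketch: monological vs. dialogical choice rules.
# Option names and scores are illustrative inventions, not from the source.

def monological_choice(options, utility):
    """One agent ranks options by its own utility and picks the max."""
    return max(options, key=utility)

def dialogical_choice(options, parties):
    """Keep only options every party finds acceptable (score > 0);
    among those, pick the one with the highest shared (minimum) score."""
    acceptable = [o for o in options if all(p(o) > 0 for p in parties)]
    if not acceptable:
        return None  # no consensus: signal that more dialogue is needed
    return max(acceptable, key=lambda o: min(p(o) for p in parties))

# Toy dilemma: a hiker encounters a bear near its cub.
options = ["back away slowly", "run", "approach the cub"]
hiker_view = {"back away slowly": 2, "run": 3, "approach the cub": 0}
bear_view  = {"back away slowly": 3, "run": 1, "approach the cub": -2}

solo = monological_choice(options, hiker_view.get)
joint = dialogical_choice(options, [hiker_view.get, bear_view.get])
print(solo)   # "run" -- best for the hiker considered alone
print(joint)  # "back away slowly" -- the best option both can accept
```

The point of the sketch is only structural: the dialogical rule can reach a different answer than the monological one because it treats the other party’s assessment as data rather than as an obstacle.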

For studies of the “Friendly AI” problem, ethics in AI, and various forms of the “values alignment problems” see, for instance:  

Benjamin, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code. 1st ed. Medford, MA: Polity, 2019.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Reprint ed. Oxford: Oxford University Press, 2016. (NOTE: an especially systematic and probing philosophical analysis.)

Christian, Brian. The Alignment Problem: Machine Learning and Human Values. 1st ed. New York, NY: W. W. Norton & Company, 2020.

Gunkel, David J. The Machine Question: Critical Perspectives on AI, Robots, and Ethics. Reprint ed. Cambridge, Mass.: The MIT Press, 2017.

Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. New York: Penguin Books, 2006.

Norvig, Peter, and Stuart Russell. Artificial Intelligence: A Modern Approach, Global Edition. 4th ed. Harlow: Pearson, 2021. (NOTE: This fourth edition begins to reframe the problematic of values alignment to include important elements that provide the basis for a dialogical reasoning approach. Russell explores these issues in other important ways in his Human Compatible, though he does not draw on explicit articulations of the principles of dialogical reasoning that have been the focus of extended research over the last 50 years, as documented in the readings cited above.)

Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. Reprint ed. New York: Penguin Books, 2020.

Tegmark, Max. Life 3.0: Being Human in the Age of Artificial Intelligence. Reprint ed. New York: Vintage, 2018. (NOTE: This provides an especially clear and systematic account of the problems and prospects concerning the future of AI from the point of view of the paradigm offered by monological inference and related assumptions, such as the substrate independence of information and intelligence. It does not explicitly consider the possibility of alternative forms of intelligence and reasoning, such as the dialogical paradigm drawing on negotiation, conflict transformation, and peacemaking.)

Ethel the Ethical Consultant Robot — This is a short document introducing a program written in MIT’s Scratch language, designed for work with K-12 students:

A lecture with Q&A on reinventing higher education in light of “Slow Zoom” and related insights drawn from the pandemic, in the context of civilizational changes related to the exponential growth of AI and the need for nonviolent strategies of dialogical reasoning to reform our economics, politics, technology, and ethics. It was first presented on October 15, 2020, as the Human Ecology Forum at the College of the Atlantic. Below the video of the lecture I have also included a pdf file of the slides used in the talk.

Posted by: Gray | September 17, 2020

Gandhi’s Dialogical Truth Force

Chapter 15 from Gandhi and the Contemporary World, ed. Sanjeev Kumar, Routledge, 2020.

https://www.routledge.com/Gandhi-and-the-Contemporary-World/Kumar/p/book/9780367408510

“Gandhi’s Dialogical Truth Force: Applying Satyagraha Models of Practical Rational Inquiry to the Crises of Ecology, Global Governance, and Technology”

The central thesis of this chapter is that Gandhi’s model of rational inquiry provides the key to addressing the existential crises being created by the dominant current models of economic, political, and technological reasoning. Part one sketches defining features of the current models of reasoning and the problems they have. It argues that: A.) they are monological (and so exclude data and voices that are essential to understanding reality) and B.) they presuppose a value-“free” or “neutral” conception of reason (and so are committed to a moral relativism in which bribery, coercion, and violence are the only ultimate sanctions for securing agreement in practical affairs). Part two sketches the principal features of Gandhi’s satyagraha, showing that it is a dialogical process of practical rational inquiry which can discover emergent objective moral truth and bear witness to it in ways that are effective in securing rational consent and enforcing rational, moral norms in nonviolent ways. As such, it provides ways to solve the problems of the current dominant models. Part three develops some examples of the ways in which satyagraha can and should be applied to the three existential crises focused on in this paper. It offers general sketches of the Gandhian alternatives to our current “civilized” forms of economic, political, and technological rationality. It also offers some specific proposals for initiatives that might be undertaken to develop and institutionalize these in systematic ways at the global level as part of a genuinely civilized global culture of peace. The proposals include resource allocation initiatives that could fund the change, legal strategies that could provide a basis for institutionalizing principles of moral truth as the foundations for an international system of justice, and legislative strategies for incarnating morality in the artificial intelligence systems and corporations that increasingly dominate our planet.

This is a chapter from the book Quakers, Politics, and Economics: Quakers in the Disciplines, Volume 5, Friends Association for Higher Education, 2018. The full title of the article is “Governing the World from the Ground Up Through Power Grounded in the Light: A Proposal for Action Research on Quaker and Gandhian Responses to our Global Crises.”

Posted by: Gray | April 10, 2019

Earth 2045: The Culture Hack Commons Scenario

I was especially fortunate to be able to attend the Augmented Intelligence Summit sponsored by the Future of Life Institute and others on March 28-31, 2019, at the 1440 Multiversity in Scotts Valley, California. Here is a bit of info on it: https://futureoflife.org/augmented-intelligence-summit-2019-2/

And here is a scenario draft that I wrote up coming out of it. It envisions some aspects of a path to a dramatically better world by 2045 through methods of funding a nonviolent system of world governance from the ground up and transforming our culture of violence through an integration of Gandhian methods and dialogical forms of enhanced Artificial Intelligence, reconceived and developed as Augmented Natural Wisdom: Culture Hack Commons Scenario 2045
