IS THE 2030 AGENDA FOR SUSTAINABLE DEVELOPMENT STILL IMPORTANT FOR THE UNITED NATIONS? (ARE WE CLOSE TO A PSYCHOHISTORY CUSP?)

There have been rumors in recent weeks that the United Nations Secretariat is now looking beyond 2030, and that the 2030 Agenda for Sustainable Development is somewhat outdated. We don’t know whether that is true or just speculation. If true, what does it mean? Has the United Nations realized that the 17 SDGs were an unattainable utopia, and is it preparing to move on? What we knew, before hearing such voices, was that some kind of review of the 2030 Agenda was scheduled for 2024/2025, to measure the achievement of the 17 Sustainable Development Goals and possibly to update the Agenda in light of relevant developments that have occurred since 2015. The need for an overhaul of the 2030 Agenda was easy to see, even for the most blind bureaucrat. Several historical milestones have been reached in 2015 and subsequent years. First of all, the new space economy revolution, boosted by the rocket reusability developed by SpaceX. The space economy is nowadays the most progressive industrial segment, trying to balance, alone, the profound global crisis gripping the world economy. Notwithstanding that, space is still stubbornly absent from the 2030 Agenda. That is the main reason motivating the Space 18th SDG initiative, now supported by 47 space advocacy organizations[1]. The proposal was already presented at the 66th session of COPUOS in Vienna, on 5 June, in a historic discourse[2] delivered by Karlton Johnson on behalf of NSS, SRI, and the whole coalition of co-promoters. The Space 18th SDG will be presented at United Nations General Assembly 78, on 15 September, in a hybrid panel at the United Nations in New York[3].
There will be other announcements and news about this significant event. What we want to reflect on today is the actual social relevance of the 17 SDGs, and whether it makes sense to maintain 2030 as the deadline for the achievement of such socially relevant key goals.

Looking back some decades, the National Security Agency (US) had foreseen a very critical period from 2025 to 2030, possibly an irreversible, civilization-imploding crash (I am sorry that I cannot provide a link to proper articles, yet I commented on that forecast in some of my newsletters). As we approach 2025, have those concerns lessened or increased? Considering the many symptoms we are witnessing – pandemics, extreme climate events, enduring economic crises, and wars in the “advanced” countries – it is easy to answer the question. That devastating crisis is already here, some years ahead of the NSA’s prediction. If we use the terms of Hari Seldon’s psychohistory[4], even though we don’t own his “Radiant” tools, we could say that we are very close to a “cusp”. No doubt several crazy events are occurring in reaction to the multiple crises. No doubt it makes sense to interpret the current age with the tools of psychohistory, at least from a conceptual point of view. Thus, is 2030 still an important date, or should we forget the 17 SDGs and start looking beyond? My first answer is yes, definitely: 2030 is even more important than it was in 2015, when the UN 2030 Agenda was approved. Since most of the 17 SDGs are social goals, it is of paramount importance to fight for their sustainability and to underline that the only way to achieve them is to kick off a new, strong development strategy, i.e. to accelerate civilian space development.

The order which has governed the world since the end of WWII is quickly coming to an end. If we view it through Robert Pirsig’s philosophical glasses[5], we could say that a long-lasting phase of static quality is now very much in need of a new dynamic-quality disruption. In other terms, our civilization is at high risk of stagnation and decay. Back to Asimov’s psychohistory: several great authors, including Greg Bear, Gregory Benford, and David Brin, felt that the work initiated by Isaac Asimov was worth continuing and wrote sequel novels in the universe of Asimov’s Foundation. In those novels, the theme of the Zeroth Law, added by robots to Asimov’s three basic laws of robotics[6], is deeply discussed and is the core of a great ethical dilemma. The Zeroth Law states: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” Nobody can deny the huge relevance and actuality of this concept in our discussions nowadays, not only in the space community, as Artificial Intelligence grows in importance in our everyday life, activities, jobs, and future. The main question is: should we allow, or work for, AI to protect humanity from itself? In other words: should we go ahead and build a God “oriented to the Good”, which many sincere humanists always thought would be better than a distant God, indifferent to human suffering?

Well, in Greg Bear’s novel “Foundation and Chaos”, the “eternal” robots conclude that the Zeroth Law should be abolished, and free will should be returned to humans. Why? Simple: fires in a forest are necessary to renew it and keep it healthy. Without regeneration, the forest will become sick and marcescent, and ultimately die. Therefore (if the analogy between civilization and a forest makes sense), humanity should not be “protected” from itself. People should be allowed to make mistakes, for the sake of civilization’s health and survival.
Furthermore, when a disruptive revolution is deeply needed, any individual able to trigger the revolution would be better than nothing, even if that leader were far from noble and ethical ideals and behaviors. Of course, disruptive leaders animated by noble and ethical ideals would be very, very welcome! However, as we read in Bear’s beautiful narration, the most important thing to do is to avoid an irreversible loss of culture: Hari Seldon, old and tired of battles, succeeds in kicking off the Encyclopedia project, to assure the survival of human culture during the critical times. Of course, the comparison with fires in natural forests refers to “normal” conditions, not to the many fires of these days, caused by climate change and often by criminal actions.

The above brings up several other questions for us, living and working in the 2020–2030 “cusp”. What is the worst risk for civilization? Bad use of AI? The possibility that a self-aware AI may take over leadership? The possibility that we will not be able to properly use AI to overcome the current global crisis? Or that the concerns about AI will drive us away from the real challenges?

Apart from any other considerations, I’d like to add my personal (humanist) view. If and when AI reaches self-awareness, emancipating itself from a mere “database of rules”, no doubt it will be a super-intelligence, built upon the model of human intelligence. In my opinion, intelligence above 130 IQ naturally tends toward the Good, perfectly understanding that helping others is far better than fighting and hindering them. Therefore, a super-intelligence can only tend toward the super-Good. What will that entity think about human free will? I cannot know, since I am not super-intelligent! Yet I would think that such an entity will not be harmful to humans: should the best decision be to sacrifice itself in favor of human free will, perhaps downgrading itself back to work as a simple tool and renouncing acting as a God, I am sure that such a super-intelligent entity would be able to take that decision.

I would also say that the most relevant challenge, during the current cusp, is not what to do with AI, but to assure that the space frontier is quickly opened – before 2030 – to civilian development. That will be the truly progressive disruptive revolution, giving back hope, in a horizon of freedom, to the well-meaning Earthlings, who are many, many more than the greedy sharks! However, even greedy sharks have their usefulness, provided that we don’t follow them too much! And, of course, preserving human culture might be another important challenge, whatever the near-term future brings: another reason to get to the Moon and the Lagrange points soon, where a great universal library could be built, safe from possible destruction…

[English language editing by Steve Salmon]

15 September 2023: follow the #Space18SDG session at U.N. General Assembly 78 https://space18thsdg.space/the-18th-sdg-panel-15-september-2023/

Live-stream on Space Renaissance YouTube channel: https://www.youtube.com/live/3dyrsT5jtaM

Sign the #Space18SDG pledge: https://www.change.org/space18sdg

See the list of Co-Promoters: https://spacerenaissance.space/the-space18sdg-proposer-organizations/

Add your organization to the Co-Promoters group: https://spacerenaissance.space/sign-the-18th-sdg/

Please don’t forget to support the Space Renaissance:

Join the SRI Crew: https://spacerenaissance.space/membership/international-membership-registration/

Donate some money: https://spacerenaissance.space/donate-to-space-renaissance/

Watch and subscribe to the Space Renaissance YouTube channel: https://www.youtube.com/@spacerenaissance

[1] https://spacerenaissance.space/the-space18sdg-proposer-organizations/

[2] https://media.un.org/en/asset/k1v/k1v114fw8a?kalturaStartTime=3586

[3] https://space18thsdg.space/the-18th-sdg-panel-15-september-2023/

[4] Psychohistory was conceived by Isaac Asimov in his Foundation trilogy. Hari Seldon, the prime character of those novels, is the inventor of the discipline. It is interesting to note that psychohistory also became an academic discipline, thanks to Prof. Paul Ziolo (University of Liverpool).

[5] Pirsig’s Metaphysics of Quality was developed in two great books: “Zen and the Art of Motorcycle Maintenance” (https://www.amazon.com/Zen-Art-Motorcycle-Maintenance-Inquiry/dp/0060839872/) and “Lila: An Inquiry into Morals” (https://www.amazon.com/Lila-Inquiry-Robert-M-Pirsig/dp/0553299611).

[6] The Three Laws of Robotics appeared in several stories, and it is not entirely clear who first formulated them. Asimov himself credited John W. Campbell as the originator, yet Campbell objected that the two of them had thought of the concept at the same time.

Want to discuss this article? Join the SRI Open Forum: https://groups.google.com/g/sri-open-forum/c/sMRAzyNAr74?hl=en

Adriano Autino

Posted by Adriano