Posts tagged AI Risk
Winning the Climate War with Kristian Rönn

Countless innovative people around the planet are working to remove carbon from the atmosphere, transform the global economy to renewable sources of energy, repair broken ecological systems, and create safe havens for climate refugees. One of them is Kristian Rönn. With a background in mathematics, philosophy, computer science, and artificial intelligence, Kristian and his team help organizations quantify their carbon footprint through a practice called carbon accounting. The practice is still in its nascent stages, but it will very likely become standard operating procedure for most companies around the world.

In this interview, Kristian talks about his previous work studying global catastrophic risks - like nuclear war, runaway artificial intelligence, and climate change - at Oxford’s Future of Humanity Institute. He goes on to describe the work Normative - the company he co-founded 10 years ago and where he currently serves as CEO - is doing to make carbon visible, and how that work fits into winning the fight against a warming planet. He finishes the interview by discussing how society can shift key measurements away from GDP toward things like well-being and happiness, and he gives advice for business and government leaders who want to use this conversation to make their organizations stronger.

Kristian Rönn is the CEO and co-founder of Normative. He is a thought leader in carbon accounting, with speaking engagements at COP and Davos as well as appearances in media outlets like Bloomberg and Sky News. He has advised governments and international bodies, and the UNDP has officially acknowledged his contribution to UN Sustainable Development Goal 13. Before starting Normative, he worked at the University of Oxford’s Future of Humanity Institute on issues related to global catastrophic risks, including climate change. In 2023, he was named one of Google.org’s “Leaders to Watch.”

Read More
Making Artificial Intelligence Safe with Charlotte Siegmann

Artificial Intelligence is embedded in our everyday lives right now, and its influence over the future of humanity will grow rapidly for generations to come. Whether that influence results in abundance for most humans or in a few winners and many losers depends largely on the decisions we make today. Charlotte Siegmann is one of the people working to ensure governments, companies, and individuals make the right choices. Her work focuses on making the development and deployment of advanced AI systems safer and more beneficial.

In this interview, Charlotte talks about the true dangers of AI, how it can benefit humanity, ideas for how AI should be regulated, and how the decisions we make today can affect many generations to come. She gives advice for business leaders interested in harnessing the power of AI for their organizations, describes the competencies employees will need to thrive in an AI-driven world, and discusses how taxing AI and robots could fund social programs and serve as a source of universal basic income.

A PhD student in economics at the Massachusetts Institute of Technology (MIT), Charlotte Siegmann is one of the incredibly bright, thoughtful people working to keep Artificial Intelligence safe and beneficial for all of humanity. She is a founding member of The Center for AI Risks & Impacts (KIRA). At MIT, she works on the economics of AI governance, at the intersection of mechanism design, game theory, and AI safety. She has worked as a Predoctoral Research Fellow in Economics at Oxford’s Global Priorities Institute, as a Research Assistant for a professor at Stanford University, and as an intern at the European Parliament.

Read More