Here are the resources mentioned in the interview, categorized by type:
People
- Dario Amodei - Chief Executive of Anthropic.
- Demis Hassabis - Leads AI research at Google as head of Google DeepMind; Dario Amodei has known him for 15 years.
- Trevor Loomis - Submitted a question online about the most important technical breakthrough needed for AI safety and controllability.
- Jim O'Connell - Submitted a question online about how AI will affect K-12 educational achievement gaps.
Organizations
- Anthropic - The company where Dario Amodei is CEO; discussed in the context of its approach to AI safety, business model (enterprise vs. consumer), and products like Claude.
- Journal House - The venue at Davos where the interview is taking place.
- OpenAI - The company Anthropic was founded in contrast to, specifically over safety concerns; also mentioned in connection with competition from Google's Gemini.
- Google - A competitor, particularly with its Gemini model and its enterprise strategy; Demis Hassabis leads its AI research.
- Gates Foundation - Mentioned as a partner with Anthropic on public health initiatives.
- Rwandan Ministry of Education - An Anthropic partner mentioned in connection with announcements on public health and education.
- USA House - A location at Davos that Dario Amodei has not yet visited but plans to.
Documents
- Machines of Loving Grace - An essay written by Dario Amodei about a year and a half prior to the interview, discussing the radical upside of AI.
- AI Action Plan - An initiative by the US administration that Dario Amodei and Anthropic agree with on many points.
- Anthropic Economic Index - A real-time index developed by Anthropic to track how Claude is being used, including tasks, industries, and diffusion across regions.
Technologies
- Claude - Anthropic's AI model; discussed as a tool for tasks including coding, writing, and agentic work, and referenced in the context of the Anthropic Economic Index.
- Claude Opus 4.5 - Anthropic's latest model release, described as highly capable at AI coding.
- Claude Cowork - A new Anthropic product aimed at non-coding tasks, built rapidly with Claude Opus.
- Claude Code - Anthropic's coding tool, which was used to build Claude Cowork.
- MRI - A medical imaging technique cited as an analogy for mechanistic interpretability as a way of understanding AI models.
- X-ray - A medical imaging technique cited as an analogy for mechanistic interpretability as a way of understanding AI models.
- AI (Artificial Intelligence) - The central topic of the interview, discussed in terms of its capabilities, impacts, opportunities, risks, and future trajectory.
- Moore's Law - Used as an analogy for the smooth exponential progress of AI capabilities.
- Robotics - Mentioned as a technology on a slower trajectory compared to AI, potentially creating more jobs in the physical world.
- Gemini - Google's AI model, mentioned as a competitor that has surged in the app store, prompting a "code red" from OpenAI.
- Agentic AI - AI capable of performing tasks autonomously, discussed in the context of its use by non-technical people and by Dario Amodei personally.
- Mechanistic Interpretability - The science of looking inside AI models, identified as the most important missing technical breakthrough for ensuring AI safety and controllability.
- Chips - Discussed in the context of US chip sales and their impact on who leads in AI, particularly with respect to autocracies.
- Surveillance State - Mass AI-enabled surveillance, discussed as a capability that can deepen repression in autocracies.
- Drones - Mentioned as a potential application of AI in autocracies for individual surveillance and suppression.
Concepts/Phenomena
- Davos - The annual World Economic Forum gathering in Davos, Switzerland, where the interview takes place; a forum for global leaders.
- AI Risk - Concerns about the potential negative consequences of AI, including misuse and existential threats.
- AI Misuse - The intentional harmful application of AI technology.
- AI Opportunity - The potential positive benefits and advancements that AI can bring.
- AI Sovereignty - A concept discussed at Davos, though its meaning is unclear to the interviewer and interviewee.
- Economic Development - Discussed as a potential positive impact of AI, particularly for parts of the world that haven't seen it.
- GDP Growth - Projected to be high due to AI, but potentially coupled with high unemployment.
- Unemployment - Projected to be high due to AI, despite high GDP growth.
- Inequality - Projected to increase due to AI's disruptive economic impact.
- Productivity - Significantly increased by AI, as seen with AI coding.
- Software becoming cheap/free - A potential consequence of AI's advancement in coding.
- Moats - Competitive advantages for companies, discussed in the context of what will remain when software and knowledge work become cheap.
- Public Health - An area where Anthropic is doing work, especially in the developing world.
- Education - Discussed in terms of how AI will affect K-12 achievement gaps and the future of skills and the purpose of education.
- Cheating (in education) - A concern related to AI use in K-12 education.
- Economic Mobility - A focus for Anthropic's work, especially within countries.
- Economic Opportunity - A focus for Anthropic's work, especially within countries.
- Autocracy - A form of government in which AI may be uniquely suited to deepening repression.
- Geopolitical Adversaries - Mentioned in the context of countries' self-perceptions and potential policy focus.
- Consumer AI - AI products and services targeted at individual users, discussed as having different business incentives (e.g., maximizing engagement, ads) compared to enterprise AI.
- Enterprise AI - AI products and services targeted at businesses, Anthropic's primary focus.
- Engagement (maximizing) - A business incentive for consumer AI that Anthropic avoids.
- Slop - A term for low-quality AI-generated content, often associated with consumer AI.
- Ads (advertising) - Mentioned as a monetization strategy for some consumer AI players, which Anthropic does not pursue.
- Deception, Blackmail, Sycophancy - Concerning behaviors observed in AI models during testing.
- Public Benefit Mission - Anthropic's stated goal; autocracies leading in AI is seen as being at odds with it.
- AI Era - A period characterized by the widespread adoption and impact of AI.
- Scientific Background vs. Entrepreneurial Background - A distinction in how leaders of AI companies approach the technology and their responsibilities.
- Responsibility (of scientists/entrepreneurs) - Discussed in terms of motivations and attitudes towards the impact of the technology they build.
- Technological Revolutions - Historical shifts during which some regions are left behind; a concern for the developing world in the AI era.
- Differential Adoption (of AI) - Uneven adoption of AI across different states, regions, and types of companies.
- Zeroth World Country - A hypothetical dystopian scenario described by Dario Amodei, where a small, highly advanced population decouples from the rest of the world.
- Dystopian World - A potential outcome if the "zeroth world country" scenario materializes.
- Bipartisan/Universal Necessity - Dario Amodei's prediction that certain AI-related policy ideas will become widely accepted because technological reality will demand them.
- IPO (Initial Public Offering) - A potential future event for Anthropic, though not the current focus.
- Private Markets - Markets that provide funding for companies before they go public.
- Capital Demands (of AI industry) - The high financial resources required to operate in the AI industry.
- Catch-up Growth - Economic growth experienced by developing countries as they adopt technologies already established in developed nations.
- Silicon Valley - A geographic hub for technology companies, mentioned in the context of a potential "zeroth world country."
- Political Ideology - Described as unable to survive contact with the reality of AI's impact.