AI / ML
Among foundation model developers, Anthropic has positioned itself as a company with a particular focus on AI safety, describing itself as building “AI research and products that put safety at the frontier.” Founded by engineers who left OpenAI over tensions around ethical and safety concerns, Anthropic developed “Constitutional AI,” a method for training and deploying large language models (LLMs) with embedded values that humans can steer. Since its founding, its goal has been to deploy “large-scale AI systems that are steerable, interpretable, and robust,” and it has continued to push toward a future powered by responsible AI.
The Anthropic Economic Index
Navigating a world in transition: Dario Amodei in conversation with Zanny Minton Beddoes
Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
Machines of Loving Grace
Google's multi-billion dollar relationship with Anthropic is under investigation
Claude 3.5 Deep Dive: This new AI destroys GPT
Microsoft Promises a 'Whale' for GPT-5, Anthropic Delves Inside a Model’s Mind and Altman Stumbles
Hallucination Experiment - Arthur AI
Dario Amodei (Anthropic CEO) - $10 Billion Models, OpenAI, Scaling, & AGI in 2 years with Dwarkesh Patel
Dario Amodei, C.E.O. of Anthropic, on the Paradoxes of A.I. Safety and Netflix’s ‘Deep Fake Love’ (New York Times)
Anthropic releases Claude 2, its second-gen AI chatbot
Claude 2.0, Anthropic's Latest ChatGPT Rival, Is Here — And This Time, You Can Try It
Inside the White-Hot Center of A.I. Doomerism
The economic potential of generative AI: The next productivity frontier - McKinsey