Alex Tamkin

Email: atamkin_cs_stanford_edu | Research Updates: @alextamkin

I am a research scientist at Anthropic. My research focuses on sociotechnical alignment of AI systems: how to build AI systems that interact positively with people and the societies we live in.

If this sounds interesting, we're hiring!

Previously, I completed my PhD in Computer Science at Stanford, where I was advised by Noah Goodman and part of the Stanford AI Lab and Stanford NLP Group.

Selected Publications

Collective Constitutional AI: Aligning a Language Model with Public Input [📝blogpost]

Saffron Huang, Divya Siddarth, Liane Lovitt, Thomas I. Liao, Esin Durmus, Alex Tamkin, Deep Ganguli. FAccT 2024. Press: [New York Times] [Time Magazine] [Business Insider]

Evaluating and Mitigating Discrimination in Language Model Decisions [🐦thread]

Alex Tamkin, Amanda Askell, Liane Lovitt, Esin Durmus, Nicholas Joseph, Shauna Kravec, Karina Nguyen, Jared Kaplan, Deep Ganguli. arXiv Preprint. Press: [VentureBeat] [TechCrunch]

Eliciting Human Preferences with Language Models [🐦thread]

Belinda Z. Li*, Alex Tamkin*, Noah D. Goodman, Jacob Andreas. arXiv Preprint. Press: [VentureBeat]

Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet [📝blogpost]

Adly Templeton*, Tom Conerly*, Jonathan Marcus, Jack Lindsey, Trenton Bricken, Brian Chen, Adam Pearce, Craig Citro, Emmanuel Ameisen, Andy Jones, Hoagy Cunningham, Nicholas L Turner, Callum McDougall, Monte MacDiarmid, Alex Tamkin, Esin Durmus, Tristan Hume, Francesco Mosconi, C. Daniel Freeman, Theodore R. Sumers, Edward Rees, Joshua Batson, Adam Jermyn, Shan Carter, Chris Olah, Tom Henighan. Press: [New York Times] [WIRED] [TIME]

Codebook Features: Sparse and Discrete Interpretability for Neural Networks [🐦thread][📝blogpost]

Alex Tamkin, Mohammad Taufeeque, Noah D. Goodman. ICML 2024

Towards Measuring the Representation of Subjective Global Opinions in Language Models

Esin Durmus, Karina Nguyen, Thomas I. Liao, Nicholas Schiefer, Amanda Askell, Anton Bakhtin, Carol Chen, Zac Hatfield-Dodds, Danny Hernandez, Nicholas Joseph, Liane Lovitt, Sam McCandlish, Orowa Sikder, Alex Tamkin, Janel Thamkul, Jared Kaplan, Jack Clark, Deep Ganguli. COLM 2024

Towards Monosemanticity: Decomposing Language Models With Dictionary Learning

Trenton Bricken*, Adly Templeton*, Joshua Batson*, Brian Chen*, Adam Jermyn*, Tom Conerly, Nicholas L Turner, Cem Anil, Carson Denison, Amanda Askell, Robert Lasenby, Yifan Wu, Shauna Kravec, Nicholas Schiefer, Tim Maxwell, Nicholas Joseph, Alex Tamkin, Karina Nguyen, Brayden McLean, Josiah E Burke, Tristan Hume, Shan Carter, Tom Henighan, Chris Olah. Preprint

Feature Dropout: Revisiting the Role of Augmentations in Contrastive Learning 

Alex Tamkin, Margalit Glasgow, Xiluo He, Noah Goodman. NeurIPS 2023

Task Ambiguity in Humans and Language Models

Alex Tamkin*, Kunal Handa*, Avash Shrestha, Noah Goodman. ICLR 2023

Oolong: Investigating What Makes Crosslingual Transfer Hard with Controlled Studies [🐦thread]

Zhengxuan Wu*, Isabel Papadimitriou*, Alex Tamkin*. EMNLP 2023

DABS 2.0: Improved Datasets and Algorithms for Universal Self-Supervision [🐦thread]

Alex Tamkin, Gaurab Banerjee, Mohamed Owda, Vincent Liu, Shashank Rammoorthy, Noah Goodman. NeurIPS 2022

Active Learning Helps Pretrained Models Learn the Intended Task [🐦thread]

Alex Tamkin*, Dat Nguyen*, Salil Deshpande*, Jesse Mu, Noah Goodman. NeurIPS 2022

DABS: A Domain-Agnostic Benchmark for Self-Supervised Learning [🌐site] [🐦thread]

Alex Tamkin, Vincent Liu, Rongfei Lu, Daniel Fein, Colin Schultz, Noah Goodman. NeurIPS 2021. Press: [Redshift Magazine] [AIM Magazine] [Stanford HAI]

C5T5: Controllable Generation of Organic Molecules with Transformers

Daniel Rothchild, Alex Tamkin, Julie Yu, Ujval Misra, Joseph Gonzalez. arXiv Preprint

On the Opportunities and Risks of Foundation Models

Center for Research on Foundation Models (full list of authors)
– Section 4.2: Training and Self-Supervision, Alex Tamkin
– Section 4.9: AI Safety and Alignment, Alex Tamkin, Geoff Keeling, Jack Ryan, Sydney von Arx
Coauthor: Sections §2.2: Vision, §3.3: Education, §4.1: Modeling, §5.6: Ethics of Scale
Press: [Forbes] [The Economist] [VentureBeat]

Viewmaker Networks: Learning Views for Unsupervised Representation Learning [📝blogpost] [🐦thread]

Alex Tamkin, Mike Wu, Noah Goodman. ICLR 2021

Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models [📝blogpost]

Alex Tamkin*, Miles Brundage*, Jack Clark, Deep Ganguli. arXiv Preprint. Press: [WIRED] [VentureBeat] [Datanami] [Slator]

Language Through a Prism: A Spectral Approach for Multiscale Language Representations [🐦thread] [📝blogpost]

Alex Tamkin, Dan Jurafsky, Noah Goodman. NeurIPS 2020

Investigating Transferability in Pretrained Language Models [🐦thread]

Alex Tamkin, Trisha Singh, Davide Giovanardi, Noah Goodman. Findings of EMNLP 2020; presented at CoNLL 2020

Distributionally-Aware Exploration for CVaR Bandits

Alex Tamkin, Ramtin Keramati, Christoph Dann, Emma Brunskill. NeurIPS 2019 Workshop on Safety and Robustness in Decision Making; RLDM 2019

Media

Anthropic's YouTube Channel - How we built Artifacts with Claude

The Pragmatic Engineer - How Anthropic built Artifacts

Quanta Magazine - How Quickly Do Large Language Models Learn Unexpected Skills?

VentureBeat - Anthropic leads charge against AI bias and discrimination with new research

TechCrunch - Anthropic’s latest tactic to stop racist AI: Asking it ‘really really really really’ nicely

VentureBeat - How can AI better understand humans? Simple: by asking us questions

WIRED Magazine - Chatbots Got Big—and Their Ethical Red Flags Got Bigger

Abrupt Future Podcast - Alex Tamkin on ChatGPT and Beyond: Navigating the New Era of Generative AI

AI Artwork in PC Magazine (twitter thread: DALL-E Meets WALL-E: an Art History)

The Gradient Podcast - Alex Tamkin on Self-Supervised Learning and Large Language Models

Press: [Communications of the ACM]

Personal


I also like making art, especially ceramics and photography!