{"id":"jason-wei","title":"Jason Wei","content":"**Jason Wei** is an American artificial intelligence researcher known for his contributions to large language models, including the development of chain-of-thought prompting. He is currently a researcher at [Meta Superintelligence Labs](https://iq.wiki/wiki/meta-superintelligence-team), having previously worked at OpenAI and Google Brain. [\\[1\\]](#cite-id-v6XAVRDAcL) [\\[2\\]](#cite-id-2tDUSfBzCb)\n\n$$widget0 [YOUTUBE@VID](https://youtube.com/watch?v=3gb-ZkVRemQ)$$\n\n## Career\n\nWei began his career as a research scientist at Google Brain, where his work focused on improving the capabilities of large language models (LLMs). During his time at Google, he was instrumental in research that popularized several key techniques in the field. His work on chain-of-thought prompting demonstrated that LLMs could perform complex reasoning tasks more effectively by generating intermediate steps, mimicking a human-like thought process. He also contributed to the development of instruction tuning, a method for fine-tuning language models to better follow user instructions, and conducted research on the emergent abilities of LLMs, which are unexpected capabilities that appear as models increase in scale. [\\[1\\]](#cite-id-v6XAVRDAcL) [\\[3\\]](#cite-id-oiIEkmwydf)\n\nIn February 2023, Wei announced that he had left Google to join the ChatGPT team at OpenAI. [\\[4\\]](#cite-id-kXNFwFrcHp) At OpenAI, he continued his work on advanced AI models and was a co-creator of the OpenAI o1 model, a series of models designed to spend more time \"thinking\" before providing an answer to handle more complex problems. He also worked on a project known as \"deep research.\" [\\[1\\]](#cite-id-v6XAVRDAcL) [\\[2\\]](#cite-id-2tDUSfBzCb) While at OpenAI, Wei became a strong proponent of reinforcement learning (RL), a training method that uses feedback to refine a model's performance. 
He described himself as an \"RL diehard\" and noted how its core concepts influenced his personal philosophy, stating that \"beating the teacher requires walking your own path and taking risks and rewards from the environment.\" [\\[3\\]](#cite-id-oiIEkmwydf) [\\[5\\]](#cite-id-6H1F6rdOR7)\n\nIn July 2025, it was reported that Wei, along with his close colleague [Hyung Won Chung](https://iq.wiki/wiki/hyung-won-chung), was leaving OpenAI to join Meta's newly formed [Superintelligence Labs](https://iq.wiki/wiki/meta-superintelligence-team). Wei's expertise in reasoning and reinforcement learning was seen as a key asset for Meta's goals. [\\[1\\]](#cite-id-v6XAVRDAcL) [\\[2\\]](#cite-id-2tDUSfBzCb) [\\[6\\]](#cite-id-k2bHzR7t8m)\n\n## Major Works\n\nWei has contributed to several influential research areas and models within the field of artificial intelligence. His work has primarily focused on enhancing the reasoning and generalization capabilities of large language models.\n\n### Chain-of-Thought Prompting\n\nWhile at Google, Wei was a key figure in the research that led to chain-of-thought (CoT) prompting. This technique prompts a language model to break down a multi-step problem into intermediate reasoning steps before giving a final answer. By externalizing its \"thought process,\" the model can arrive at more accurate and reliable solutions for complex tasks in areas like arithmetic, commonsense, and symbolic reasoning. This work, published in a 2022 paper titled \"Chain-of-Thought Prompting Elicits Reasoning in Language Models,\" helped popularize the method and significantly influenced subsequent research in LLM reasoning. [\\[1\\]](#cite-id-v6XAVRDAcL) [\\[3\\]](#cite-id-oiIEkmwydf)\n\n### Instruction Tuning and Emergent Abilities\n\nWei also contributed to research on instruction tuning, particularly with the FLAN (Finetuned Language Net) models at Google. 
Instruction tuning involves fine-tuning a pre-trained language model on a collection of datasets formatted as instructions. This process improves the model's ability to perform a wide variety of unseen tasks in a zero-shot setting, making it more general-purpose and aligned with user intent. His work is detailed in papers such as \"Finetuned Language Models Are Zero-Shot Learners\" (2021) and \"Scaling Instruction-Finetuned Language Models\" (2022). Concurrently, he co-authored the paper \"Emergent Abilities of Large Language Models\" (2022), which characterized how certain capabilities of LLMs appear unpredictably at specific scales of model size and training data, rather than improving smoothly. [\\[1\\]](#cite-id-v6XAVRDAcL)\n\n### OpenAI o1 Model\n\nAt OpenAI, Wei was a co-creator of the o1 model, which was introduced in a preview release in September 2024. The o1 series of models was designed to improve upon the reasoning limitations of previous models by dedicating more computational effort, or \"thinking time,\" before generating a response. This approach allows the model to reason through complex tasks in fields like science, mathematics, and coding with greater accuracy. Wei explained that the model represented a significant advance for the field of AI, moving beyond simple chain-of-thought prompting to a more sophisticated reasoning process. 
[\\[7\\]](#cite-id-mYLUXM43mk) [\\[2\\]](#cite-id-2tDUSfBzCb)","summary":"AI researcher Jason Wei of Google, OpenAI, and Meta is known for popularizing key LLM techniques like chain-of-thought prompting and instruction tuning.","images":[{"id":"QmRhNjxZTZ2gskbyguQizXxuByHNPYhz6mgVzyiLJbTVZF","type":"image/jpeg, image/png"}],"categories":[{"id":"people","title":"people"}],"tags":[{"id":"AI"},{"id":"Developers"}],"media":[{"id":"QmRhNjxZTZ2gskbyguQizXxuByHNPYhz6mgVzyiLJbTVZF","type":"GALLERY","source":"IPFS_IMG"},{"id":"QmZCizdVdMAHEw4RXBZWDh7XDkR9EBVRHCt1B2YpM2JUz2","type":"GALLERY","source":"IPFS_IMG"},{"id":"QmUuMtMyQvi9yrRUV1cH6hFggAcVEQFteJgzL5xpz2FT8p","type":"GALLERY","source":"IPFS_IMG"},{"id":"https://www.youtube.com/watch?v=3gb-ZkVRemQ","name":"3gb-ZkVRemQ","caption":"","thumbnail":"https://www.youtube.com/watch?v=3gb-ZkVRemQ","source":"YOUTUBE"}],"metadata":[{"id":"references","value":"[\n  {\n    \"id\": \"v6XAVRDAcL\",\n    \"url\": \"https://www.jasonwei.net/\",\n    \"description\": \"Jason Wei's personal website\",\n    \"timestamp\": 1755280729802\n  },\n  {\n    \"id\": \"2tDUSfBzCb\",\n    \"url\": \"https://www.wired.com/story/jason-wei-open-ai-meta/\",\n    \"description\": \"WIRED article on Jason Wei's move to Meta\",\n    \"timestamp\": 1755280729802\n  },\n  {\n    \"id\": \"oiIEkmwydf\",\n    \"url\": \"https://www.techtimes.com/articles/311364/20250716/meta-hires-jason-wei-hyung-won-chung-openai-boost-superintelligence-research.htm\",\n    \"description\": \"TechTimes report on Meta hiring Jason Wei\",\n    \"timestamp\": 1755280729802\n  },\n  {\n    \"id\": \"kXNFwFrcHp\",\n    \"url\": \"https://x.com/_jasonwei/status/1625575747401441280\",\n    \"description\": \"Jason Wei's announcement of joining OpenAI\",\n    \"timestamp\": 1755280729802\n  },\n  {\n    \"id\": \"6H1F6rdOR7\",\n    \"url\": \"https://x.com/_jasonwei/status/1945294042138599722\",\n    \"description\": \"Jason Wei's social media post on reinforcement 
learning\",\n    \"timestamp\": 1755280729802\n  },\n  {\n    \"id\": \"k2bHzR7t8m\",\n    \"url\": \"https://www.webpronews.com/meta-poaches-openai-researchers-jason-wei-and-hyung-won-chung/\",\n    \"description\": \"WebProNews coverage of Meta hiring Wei and Chung\",\n    \"timestamp\": 1755280729802\n  },\n  {\n    \"id\": \"mYLUXM43mk\",\n    \"url\": \"https://x.com/\\\\_jasonwei/status/1834278706522849788\",\n    \"description\": \"Jason Wei's social media post about o1\",\n    \"timestamp\": 1755280729802\n  }\n]"},{"id":"linkedin_profile","value":"https://www.linkedin.com/in/jason-wei-5a7323b0/"},{"id":"website","value":"https://www.jasonwei.net/"},{"id":"twitter_profile","value":"https://x.com/_jasonwei"},{"id":"instagram_profile","value":"https://www.instagram.com/jason.w.wei/"},{"id":"commit-message","value":"\"Published Jason Wei's wiki\""}],"events":[{"id":"1a26a786-0414-4795-be3e-2cd2c9e9965b","date":"2021-02","title":"Joined Google Brain","type":"DEFAULT","description":"Began working as a research scientist at Google Brain, where his work helped popularize chain-of-thought prompting, instruction tuning, and emergent phenomena in large language models.","multiDateStart":null,"multiDateEnd":null},{"id":"11bb8726-e546-4a8c-9ffb-ac4e139d9014","date":"2023-02","title":"Joined OpenAI","type":"DEFAULT","description":"Joined OpenAI's research team, contributing to projects focused on large language models and AI reasoning, including work on the ChatGPT team.","multiDateStart":null,"multiDateEnd":null},{"id":"793da490-f2f3-4040-b275-239c55b4b625","date":"2024-09","title":"Co-created OpenAI o1","type":"DEFAULT","description":"Co-created the OpenAI o1 model, designed to spend more time thinking before responding to reason through complex tasks, and also contributed to deep research initiatives.","multiDateStart":null,"multiDateEnd":null},{"id":"5578d058-e0ab-45c9-bfc0-6cbfb582b147","date":"2025-07","title":"Joined Meta Superintelligence 
Labs","type":"DEFAULT","description":"Departed OpenAI to join Meta's newly formed Superintelligence Labs, continuing his research on advanced artificial intelligence.","multiDateStart":null,"multiDateEnd":null}],"user":{"id":"0x8af7a19a26d8fbc48defb35aefb15ec8c407f889"},"author":{"id":"0x8af7a19a26d8fbc48defb35aefb15ec8c407f889"},"language":"en","version":1,"linkedWikis":{"blockchains":[],"founders":[],"speakers":[]}}