{"id":"jack-rae","title":"Jack Rae","content":"**Jack Rae** is a distinguished scientist known for his work on large language models (LLMs), compression, and reinforcement learning. He is currently part of Meta's Superintelligence Labs team.\n\n## Education\n\nJack Rae completed a Doctor of Philosophy (Ph.D.) in Computer Science at University College London (UCL) between 2016 and 2020. His doctoral work explored mechanisms for lifelong reasoning, with a focus on memory models built on sparse and compressive structures.\n\nIn 2013, he obtained a Master of Science in Statistics from Carnegie Mellon University, earning a GPA of 4.1 on a 4.3 scale. While there, he participated in student groups including the Explorer’s Society and the CMU Cycling Club.\n\nEarlier, Rae earned a Master of Science in Mathematics and Computer Science from the University of Bristol, where he studied from 2008 to 2012 and graduated with First Class Honours. While enrolled, he was also a member of the university’s cycling club. [\\[10\\]](#cite-id-h5ky3dJlxQ) [\\[11\\]](#cite-id-dMlRSinKcc)\n\n## Career\n\nJack Rae has held positions at several prominent technology companies focused on artificial intelligence research and development. He worked at Quora before spending approximately seven and a half years at Google DeepMind, where he served as a pre-training technical lead for the Gemini model and spearheaded the development of reasoning capabilities for Gemini 2.5. His work at Google also included contributions to models such as Gopher and Chinchilla.\n\nIn July 2022, Rae announced he was joining OpenAI. 
He later moved to Meta, announcing the move in June, and now serves as a Distinguished Scientist within the Superintelligence Labs, focusing on LLMs, compression, and reinforcement learning.\n\nThroughout his career, Rae has contributed to several significant AI models, including the Gopher and Chinchilla LLMs and the Gemini project, where he led pre-training and later the development of reasoning for Gemini 2.5. He has also commented on updates to Gemini 2.0 Flash Thinking, noting improved performance and capabilities such as long-context handling and code execution. [\\[1\\]](#cite-id-szzuGmJKTB) [\\[2\\]](#cite-id-sEPhV1ranV) [\\[3\\]](#cite-id-BTt4PtSXiD) [\\[4\\]](#cite-id-h3ECZHLY8E) [\\[5\\]](#cite-id-JfTHnqptPo)\n\n## Views on AI\n\nRae has publicly shared his perspectives on trends and developments in artificial intelligence. He has repeatedly commented on the recurring claim that \"deep learning is hitting a wall,\" often revisiting it in a celebratory tone on notable dates. He has also discussed the \"bitter lesson,\" suggesting that much of the research from decades of dialogue-systems publications did not directly lead to models like ChatGPT, and highlighting a shift away from traditional methods such as slot filling, intent modeling, sentiment detection, and hybrid symbolic approaches. Rae has also expressed views on the potential emergence of Artificial General Intelligence (AGI), commenting on specific demonstrations of advanced AI capabilities. [\\[6\\]](#cite-id-2TZf0HJ5lt) [\\[7\\]](#cite-id-utatfCPIEm) [\\[8\\]](#cite-id-6wFr2Ra2Y2) [\\[9\\]](#cite-id-I9tpK0ptNE)\n\n## Interviews\n\n### Discussion on Gemini 2.5 Pro and AI Research #01\n\nOn April 5, 2025, Jack Rae appeared on the YouTube channel *Cognitive Revolution* to discuss developments in large language models, focusing on Gemini 2.5 Pro. 
As Principal Research Scientist at Google DeepMind and technical lead for inference-time reasoning and scaling, Rae outlined key engineering strategies and research directions shaping current AI systems.\n\nHe described Gemini 2.5 Pro as the product of ongoing refinements in architecture and training, noting that its ability to handle input contexts of hundreds of thousands of tokens reflects gradual progress rather than sudden breakthroughs. These advancements were attributed to collaborative efforts and scaling practices within DeepMind.\n\n$$widget0 [YOUTUBE@VID](https://youtube.com/watch?v=u0iIPxfwjKU)$$\n\nRae also commented on the convergence among AI labs around reasoning techniques such as chain-of-thought prompting, suggesting that shared challenges and resource environments contribute to similar outcomes. He discussed the role of reinforcement learning based on correctness signals in improving model reasoning, emphasizing its incremental evolution over time.\n\nThe conversation addressed challenges in interpretability, particularly the difficulty of analyzing internal model processes. Rae highlighted ongoing work in mechanistic interpretability aimed at improving transparency in models using complex reasoning paths.\n\nRegarding the path toward artificial general intelligence (AGI), Rae identified areas such as long-term memory, multimodal learning, and agent behavior as current research priorities. He mentioned that Gemini 2.5 Pro’s long-context capabilities enable interaction with extended inputs, such as large codebases or documents, without the need for summarization.\n\nHe also noted that model deployment involves trade-offs, including compute limitations and user experience design, which shape how systems are used in practice. Throughout the interview, Rae emphasized the importance of iteration, scaling, and empirical testing in the development of language models. 
 [\\[12\\]](#cite-id-YbdA3nv3mG)","summary":"Jack Rae is a Distinguished Scientist at Meta, working within their Superintelligence Labs. He is known for his work on large language models like Gopher, Chinchilla, and Gemini, having previously held roles at Google DeepMind, OpenAI, and Quora.","images":[{"id":"QmRbaGEBAYQiZXQ4ZiQDcTv54f175Z7h6HKhWwSTBtvXxa","type":"image/jpeg, image/png"}],"categories":[{"id":"people","title":"people"}],"tags":[{"id":"Developers"}],"media":[{"id":"Qmd1KuwRdTcqWU47wQGCcDnuwvhWu3fMJVrH6q63SDvUap","name":"1517356441752.jpeg","caption":"","thumbnail":"Qmd1KuwRdTcqWU47wQGCcDnuwvhWu3fMJVrH6q63SDvUap","source":"IPFS_IMG"},{"id":"QmSMqoCBZ57YafW5bS3kq4r7GyJkZwh81VPYBSE9WVXPHa","name":"YHobJbCS_400x400.jpg","caption":"","thumbnail":"QmSMqoCBZ57YafW5bS3kq4r7GyJkZwh81VPYBSE9WVXPHa","source":"IPFS_IMG"},{"id":"QmYTqCzmUkoj4a2fS1fzdBD71g4KGXETRs2K2AWZhJo3U5","name":"images.jpeg","caption":"","thumbnail":"QmYTqCzmUkoj4a2fS1fzdBD71g4KGXETRs2K2AWZhJo3U5","source":"IPFS_IMG"},{"id":"Qmeyf6Rw1nkWDaLo3guWACXqi35qdnm3G6RsvKNBhDwPxU","name":"citations.jpeg","caption":"","thumbnail":"Qmeyf6Rw1nkWDaLo3guWACXqi35qdnm3G6RsvKNBhDwPxU","source":"IPFS_IMG"},{"id":"https://www.youtube.com/watch?v=u0iIPxfwjKU","name":"u0iIPxfwjKU","caption":"","thumbnail":"https://www.youtube.com/watch?v=u0iIPxfwjKU","source":"YOUTUBE"}],"metadata":[{"id":"references","value":"[{\"id\":\"szzuGmJKTB\",\"url\":\"https://x.com/jack_w_rae/status/1546717243689422849\",\"description\":\"Jack Rae's X post about joining OpenAI\",\"timestamp\":1752258372048},{\"id\":\"sEPhV1ranV\",\"url\":\"https://www.reuters.com/business/zuckerbergs-meta-superintelligence-labs-poaches-top-ai-talent-silicon-valley-2025-07-08/\",\"description\":\"Reuters article on Meta hiring\",\"timestamp\":1752258372048},{\"id\":\"BTt4PtSXiD\",\"url\":\"https://x.com/jack_w_rae\",\"description\":\"Jack Rae's X 
profile\",\"timestamp\":1752258372048},{\"id\":\"h3ECZHLY8E\",\"url\":\"https://x.com/jack_w_rae/status/1939784714271039591\",\"description\":\"Jack Rae's X post about joining Meta\",\"timestamp\":1752258372048},{\"id\":\"JfTHnqptPo\",\"url\":\"https://x.com/jack_w_rae/status/1881850277692936233\",\"description\":\"Jack Rae's X post about Gemini 2.0 Flash Thinking\",\"timestamp\":1752258372048},{\"id\":\"2TZf0HJ5lt\",\"url\":\"https://x.com/jack_w_rae/status/1512107972494831616\",\"description\":\"Jack Rae's X post from Apr 2022\",\"timestamp\":1752258372048},{\"id\":\"utatfCPIEm\",\"url\":\"https://x.com/jack_w_rae/status/1766803741414699286\",\"description\":\"Jack Rae's X post from Mar 2024\",\"timestamp\":1752258372048},{\"id\":\"6wFr2Ra2Y2\",\"url\":\"https://x.com/jack_w_rae/status/1601044625447301120\",\"description\":\"Jack Rae's X post about the bitter lesson\",\"timestamp\":1752258372048},{\"id\":\"I9tpK0ptNE\",\"url\":\"https://x.com/jack_w_rae/status/1786177531743494407\",\"description\":\"Jack Rae's X post about AGI\",\"timestamp\":1752258372048},{\"id\":\"h5ky3dJlxQ\",\"description\":\"Education: Jack Rae\",\"timestamp\":1752258733797,\"url\":\"https://www.linkedin.com/in/jackrae/details/education/\"},{\"id\":\"dMlRSinKcc\",\"description\":\"Jack W Rae\\nGoogle\",\"timestamp\":1752258749278,\"url\":\"https://scholar.google.com/citations?user=uevMVaQAAAAJ&hl=en\"},{\"id\":\"YbdA3nv3mG\",\"description\":\"Scaling \\\"Thinking\\\": Gemini 2.5 Tech Lead Jack Rae on Reasoning, Long Context, & the Path to AGI\\n\",\"timestamp\":1752259174506,\"url\":\"https://www.youtube.com/watch?v=u0iIPxfwjKU\"}]"},{"id":"linkedin_profile","value":"https://www.linkedin.com/in/jackrae/"},{"id":"twitter_profile","value":"https://x.com/jack_w_rae"},{"id":"commit-message","value":"Republishing Jack Rae 
wiki"},{"id":"previous_cid","value":"QmSZSJvQ1KzT8QHsJh66bmcSh2tuoPh15FxC7jigKQo4MH"}],"events":[],"user":{"id":"0x8af7a19a26d8fbc48defb35aefb15ec8c407f889"},"author":{"id":"0x8af7a19a26d8fbc48defb35aefb15ec8c407f889"},"language":"en","version":1,"linkedWikis":{"blockchains":[],"founders":[],"speakers":[]}}