Humans have an extraordinary ability to learn new concepts from just a few examples (few-shot learning), but the cognitive mechanism behind this ability remains mysterious. In a recent article published in PNAS, we show how the geometry of neural representations governs the ability of simple neural circuits to learn new concepts. We introduce a mathematical theory that identifies key measurable geometric properties of the neural code and shows how these properties predict few-shot learning performance. We further show that both the primate visual cortex and artificial deep neural networks (DNNs) progressively reformat the geometry of their neural representations so that new concepts can be learned more flexibly. Our investigation reveals intriguing similarities, but also striking differences, between the geometry of neural codes in primate cortex and in DNNs.
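To make the setting concrete, here is a minimal illustrative sketch of few-shot learning with a simple readout on top of a feature representation. It is not the exact model from the article: it assumes each concept is an isotropic Gaussian cloud in feature space and uses a nearest-prototype readout (the mean of the few training examples per concept); all function names and parameter values below are hypothetical choices for illustration.

```python
# Illustrative sketch (not the paper's exact model): few-shot concept learning
# with a simple "prototype" readout on top of feature representations.
# Assumption: each concept is an isotropic Gaussian point cloud in feature space.

import numpy as np

rng = np.random.default_rng(0)

def sample_concept(center, radius, n, dim):
    """Draw n points from an isotropic Gaussian 'concept manifold'."""
    return center + radius * rng.standard_normal((n, dim)) / np.sqrt(dim)

def few_shot_accuracy(dim=200, radius=1.0, separation=2.0, k_shot=5, n_test=1000):
    """Estimate test accuracy of a prototype readout trained on k examples per concept."""
    # Two novel concepts with centers a fixed distance apart in feature space.
    center_a = np.zeros(dim)
    center_b = np.zeros(dim)
    center_b[0] = separation

    # A few labeled examples per concept -> class prototypes (empirical means).
    proto_a = sample_concept(center_a, radius, k_shot, dim).mean(axis=0)
    proto_b = sample_concept(center_b, radius, k_shot, dim).mean(axis=0)

    # Classify held-out examples of concept A by their nearest prototype.
    test_a = sample_concept(center_a, radius, n_test, dim)
    dist_a = np.linalg.norm(test_a - proto_a, axis=1)
    dist_b = np.linalg.norm(test_a - proto_b, axis=1)
    return np.mean(dist_a < dist_b)

for k in (1, 2, 5, 10):
    print(f"{k}-shot accuracy: {few_shot_accuracy(k_shot=k):.3f}")
```

In this toy setup, generalization accuracy depends only on geometric quantities of the representation (here, the separation between concept centers relative to the spread and dimensionality of each cloud and the number of training examples), which is the kind of relationship the theory makes precise for real neural codes.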