Normative Accounts of Capacity Constraints Associated with Cognitive Control. Musslick S & Cohen JD (2021). Rationalizing constraints on the capacity for cognitive control. Trends in Cognitive Sciences, 25(9), 757-775. [musslick2021rationalizing.pdf]
Attention and Binding. Treisman A (1998). Feature binding, attention and object perception. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 353(1373), 1295-1306.
Musslick et al. 2023.pdf
Principled Account of Capacity Constraints in Terms of Representation. Draft manuscript: CoC_draft_JDC - revised clean.pdf
Episodic vs. Working Memory. Beukers AO, Hamin M, Norman KA & Cohen JD (2024). When working memory may be just working, not memory. Psychological Review, 131(2), 563.
* Semantics, Context and Control. Giallanza T, Campbell D, Cohen JD & Rogers TT (2024). An integrated model of semantics and control. Psychological Review.
Semantics, Control and Context Inference. Giallanza T, Rogers TT & Cohen JD. (under review). An integrated model of semantics and control, Part 2: Solving the similarity paradox through context inference.
External memory in neural networks: The Neural Turing Machine. Graves A, Wayne G & Danihelka I (2014). Neural Turing Machines. arXiv preprint arXiv:1410.5401.
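The content-based addressing at the heart of the NTM's read heads is compact enough to sketch alongside this entry. Below is a minimal numpy illustration (names, array shapes, and the toy probe are assumptions for illustration, not the paper's code): similarity of a key to every memory row, sharpened by a precision parameter beta and normalized by a softmax, yields the read weights.

```python
# Minimal sketch of NTM-style content-based reading (Graves et al., 2014).
# All names and shapes here are illustrative assumptions.
import numpy as np

def content_read(memory: np.ndarray, key: np.ndarray, beta: float) -> np.ndarray:
    """Weight each memory row by sharpened cosine similarity to the key."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sims)          # sharpen with the precision parameter
    w /= w.sum()                     # softmax normalization -> read weights
    return w @ memory                # read vector: weighted sum of rows

memory = np.random.randn(128, 20)              # 128 slots, 20-d contents
key = memory[7] + 0.1 * np.random.randn(20)    # noisy probe for slot 7
print(np.allclose(content_read(memory, key, beta=50.0), memory[7], atol=0.1))
```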
Episodic Memory and Abstraction. Webb TW, Sinha I & Cohen JD (2021). Emergent symbols through binding in external memory. ICLR 2021: Proceedings of the International Conference on Learning Representations.
Relational Bottleneck. Webb TW, Frankland SM, Altabaa A, Segert S, Krishnamurthy K, Campbell D, Russin J, Giallanza T, Dulberg Z, O'Reilly R, Lafferty J & Cohen JD (2024). The relational bottleneck as an inductive bias for efficient abstraction. Trends in Cognitive Sciences, 28(9):829-843. doi: 10.1016/j.tics.2024.04.001
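The inductive bias this review names can be caricatured in a few lines: let the downstream module see only the relations among inputs (here, pairwise inner products), never the input embeddings themselves, so that what is learned generalizes over content. The ABA/ABB identity task and dimensions below are illustrative assumptions, not the paper's models.

```python
# Toy sketch of a relational bottleneck: downstream processing receives
# only the pairwise relation matrix, not the embeddings themselves.
import numpy as np

def relation_matrix(embeddings: np.ndarray) -> np.ndarray:
    """All pairwise inner products; the only signal passed downstream."""
    return embeddings @ embeddings.T

# The relation matrix exposes which items match regardless of what the
# items are, so a rule like ABA vs. ABB transfers to novel fillers.
a, b = np.random.randn(8), np.random.randn(8)
print(np.round(relation_matrix(np.stack([a, b, a])), 1))  # ABA signature
print(np.round(relation_matrix(np.stack([a, b, b])), 1))  # ABB signature
```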
* Episodic Generalization and Control. Giallanza T, Campbell D & Cohen JD (2024). Toward the emergence of intelligent control: Episodic generalization and optimization. Open Mind, 8, 688-722.
* Connectionist vs. Symbolic Processing. Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1-2), 3-71.
Connectionist Model of Symbol Processing. Touretzky, D. S., & Hinton, G. E. (1985). Symbols among the neurons: Details of a connectionist inference architecture. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence (IJCAI-85) (pp. 238-243).
Analogical Reasoning and LLMs. Webb, T., Holyoak, K. J., & Lu, H. (2023). Emergent analogical reasoning in large language models. Nature Human Behaviour, 7, 1526–1541.
In-context Learning and Induction Heads. Olsson, C., Elhage, N., Nanda, N., Joseph, N., DasSarma, N., Henighan, T., … & Olah, C. (2022). In-context learning and induction heads. arXiv preprint arXiv:2209.11895.
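The induction-head pattern described in this paper (attend to positions whose previous token matches the current token, then copy what followed) reduces, in caricature, to a prefix-matching lookup. The sketch below is a hand-coded toy under that reading, not a trained attention head.

```python
# Hand-coded caricature of the induction-head pattern: complete [A][B]...[A]
# with [B] by matching the previous occurrence of the current token.
def induction_predict(tokens):
    current = tokens[-1]
    for i in range(len(tokens) - 2, 0, -1):  # scan context right to left
        if tokens[i - 1] == current:         # previous-token (prefix) match
            return tokens[i]                 # copy the token that followed
    return None

print(induction_predict("A B C A".split()))  # -> 'B'
```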
Holographic Reduced Representations (Compressed Tensor Products). Plate, T. A. (1995). Holographic reduced representations. IEEE Transactions on Neural Networks, 6(3), 623-641.
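Plate's binding operation is circular convolution, which compresses the role-filler outer product back to a vector of the original dimensionality; unbinding is approximate, by circular correlation. A short sketch using FFTs (a standard way to compute circular convolution; the dimensionality is an illustrative assumption):

```python
# Minimal sketch of holographic reduced representations (Plate, 1995):
# bind by circular convolution (via FFT), unbind by circular correlation.
import numpy as np

def bind(role, filler):
    # Circular convolution: elementwise product in the Fourier domain.
    return np.real(np.fft.ifft(np.fft.fft(role) * np.fft.fft(filler)))

def unbind(trace, role):
    # Circular correlation approximately inverts binding with the role.
    return np.real(np.fft.ifft(np.conj(np.fft.fft(role)) * np.fft.fft(trace)))

n = 512
role = np.random.randn(n) / np.sqrt(n)
filler = np.random.randn(n) / np.sqrt(n)
recovered = unbind(bind(role, filler), role)
print(np.corrcoef(recovered, filler)[0, 1])  # high but noisy
```

Unlike the full tensor product in the next entry, recovery is noisy, which is why HRR systems typically pair unbinding with a clean-up memory over known fillers.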
Tensor Product Representations and Symbolic Processing. Smolensky, P. (1990). Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artificial Intelligence, 46(1-2), 159-216.
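Smolensky's scheme binds each filler to a role by an outer product and superposes the bindings into one tensor; with orthonormal roles, unbinding by contraction with a role vector is exact. A minimal sketch with assumed dimensions:

```python
# Minimal sketch of tensor product variable binding (Smolensky, 1990):
# structure = sum of role (x) filler outer products; orthonormal roles
# allow exact unbinding by contraction with the role vector.
import numpy as np

roles = np.eye(3)                 # three orthonormal role vectors
fillers = np.random.randn(3, 5)   # arbitrary filler vectors

structure = sum(np.outer(roles[i], fillers[i]) for i in range(3))

print(np.allclose(roles[1] @ structure, fillers[1]))  # True: exact recovery
```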
Tensor Product Representations and Deep Learning. McCoy, R. T., Linzen, T., Dunbar, E., & Smolensky, P. (2018). RNNs implicitly implement tensor product representations. arXiv preprint arXiv:1812.08718.
LLM content effects. Dasgupta, I., Lampinen, A. K., Chan, S. C., Creswell, A., Kumaran, D., McClelland, J. L., & Hill, F. (2022). Language models show human-like content effects on reasoning. arXiv preprint arXiv:2207.07051.