Associative learning is a mechanism ubiquitous throughout human and animal cognition, but one absent from ACT-R 6. ACT-R 4 previously implemented a Bayesian learning algorithm that derived the strength of association between two items from the likelihood that one item was recalled in the context of the other, versus being recalled outside of that context. This algorithm suffered from asymmetries that tended to drive all associations toward strong inhibition the longer a model ran. We instead present a Hebbian learning algorithm inspired by spiking neurons and the Rescorla-Wagner model of classical conditioning, and show how this mechanism addresses the asymmetries of the prior Bayesian implementation. In addition, we demonstrate that balanced learning of both positive and negative associations is not only neurally and behaviorally plausible, but also benefits learning and constrains representational complexity. We illustrate this with a simple model of list learning derived from Anderson et al. (1998).
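To make the Rescorla-Wagner inspiration concrete, the following is a minimal sketch of the classic Rescorla-Wagner update rule, not the paper's ACT-R mechanism itself; the function and parameter names here are illustrative assumptions. The shared prediction-error term is what yields balanced learning: associations strengthen toward an asymptote when an outcome occurs and are driven negative (inhibitory) when a predicted outcome fails to occur.

```python
def rescorla_wagner(weights, present_cues, outcome, alpha=0.1, lam=1.0):
    """One trial of Rescorla-Wagner learning (illustrative sketch).

    weights:      dict mapping cue -> associative strength V
    present_cues: set of cues present on this trial
    outcome:      True if the outcome (US) occurred on this trial
    alpha:        learning rate (assumed combined salience parameter)
    lam:          asymptote of learning when the outcome is present
    """
    target = lam if outcome else 0.0
    # The prediction is the summed strength of all cues present on the trial.
    prediction = sum(weights.get(c, 0.0) for c in present_cues)
    error = target - prediction
    for c in present_cues:
        # All present cues share the same error term, so strengths can grow
        # positive (excitatory) or negative (inhibitory) as needed.
        weights[c] = weights.get(c, 0.0) + alpha * error
    return weights

# Repeatedly pairing cue A with the outcome drives V(A) toward lambda.
w = {}
for _ in range(50):
    rescorla_wagner(w, {"A"}, True)
```

Presenting a second cue alongside the trained one without the outcome then pushes that cue's strength below zero, the conditioned-inhibition pattern that motivates learning negative associations rather than only positive ones.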