How language users become able to produce forms they have never encountered in the input is central to our understanding of language cognition. A range of models, including rule-based, stochastic, and analogy-based models, has been proposed to account for this ability. Although all three classes of model are reasonably successful, we argue that productivity is more insightfully captured through learnability than through rules or probabilities. Using a combination of computational modelling and behavioural experimentation, we show that the basic principle of error-driven learning allows language users to detect relevant patterns of any degree of systematicity. In the case of allomorphy, these patterns are found at a level that cuts across phonology and morphology and is not considered by mainstream approaches to language. Our findings thus highlight how a learning-based approach applies to phenomena along the continuum from rule-based through probabilistic to “unruly”, and they constrain our inferences about the types of structure that a cognitively realistic account of language representation should target.
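For readers unfamiliar with the error-driven learning principle invoked above, it is standardly formalised in this literature with the Rescorla–Wagner update rule; since the abstract does not spell out the learning rule, the formulation below is a generic sketch rather than necessarily the exact model used here, and the symbols ($w_{ij}$, $\eta$, $\lambda_j$, $\textsc{Cues}(t)$) are illustrative labels:

\[
\Delta w_{ij} \;=\; \eta \left( \lambda_j \;-\; \sum_{k \,\in\, \textsc{Cues}(t)} w_{kj} \right)
\]

Here $w_{ij}$ is the association weight from cue $i$ to outcome $j$, $\eta$ is a learning rate, $\lambda_j$ is the target activation of outcome $j$ on learning event $t$ (e.g.\ $1$ if the outcome is present, $0$ otherwise), and the sum runs over all cues present in the event. Weights are adjusted in proportion to the prediction error, so cues that reliably predict an outcome gain association strength while uninformative cues are driven towards zero; this is what allows patterns of any degree of systematicity to be picked up.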