Advances in mobile touchscreen computing offer new opportunities to test traditional cognitive architectures and modeling tools in a novel task domain. ACT-Touch, an extension of the ACT-R 6 (Adaptive Control of Thought-Rational) cognitive architecture, seeks to update and expand methods for modeling touch and gesture in today's increasingly mobile computing environment. ACT-Touch adds new motor movement styles to the existing ACT-R architecture (such as tap, swipe, pinch, reverse-pinch, and rotate gestures) and includes a simulated multi-touch touchscreen device with which models may interact. An ACT-Touch model was constructed to explore the nature of human errors observed qualitatively during previously conducted formative usability testing, in which participants occasionally missed taps on a particular interface button while completing a biometric sensor configuration task on a tablet computer. Owing to a feature unique to the mobile touchscreen environment, namely finger size relative to target size, these objectively small motor movement errors combined with interface usability issues to produce disproportionately large effects on cognition and task performance. This finding improved both the interface (a practical application) and the model (theory).