Sequence learning is an important process involved in many
cognitive tasks, and is probably one of the most important
processes governing music processing. In this work we build
and evaluate computational models designed to solve a tone-sequence
learning task in a framework that simulates forced-choice
task experiments. The specific approach we have selected
is that of Artificial Neural Networks in an on-line setting,
meaning the network weights are updated each time a new
event is presented.
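As a minimal sketch of this on-line setting (not the paper's exact architecture), the snippet below trains a small one-hidden-layer next-event predictor with a time window of one event, applying a gradient step after every single event pair rather than in batches. The network size, learning rate, and 12-way one-hot encoding are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES = 12   # assumed one-hot pitch-class encoding
HIDDEN = 16      # hypothetical hidden-layer size
LR = 0.1         # hypothetical learning rate

W1 = rng.normal(0, 0.1, (HIDDEN, N_CLASSES))
W2 = rng.normal(0, 0.1, (N_CLASSES, HIDDEN))

def one_hot(i):
    v = np.zeros(N_CLASSES)
    v[i] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def online_step(cur, nxt):
    """One on-line update: predict the next event from the current one,
    then immediately take an SGD step on the cross-entropy loss."""
    global W1, W2
    x, t = one_hot(cur), one_hot(nxt)
    h = np.tanh(W1 @ x)
    y = softmax(W2 @ h)
    dy = y - t                       # softmax + cross-entropy gradient
    dW2 = np.outer(dy, h)
    dh = (W2.T @ dy) * (1 - h ** 2)  # backprop through tanh
    dW1 = np.outer(dh, x)
    W2 -= LR * dW2                   # weights change after EVERY event,
    W1 -= LR * dW1                   # never in batches
    return -np.log(y[nxt] + 1e-12)   # surprisal of the true next event

# A repeating toy "tone word": the model should come to expect it.
sequence = [0, 4, 7] * 150
losses = [online_step(a, b) for a, b in zip(sequence, sequence[1:])]
```

After exposure, the surprisal of events inside the repeated pattern drops, which is the kind of statistical-regularity learning the simulated forced-choice task probes.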
Here, we aim to simulate the findings obtained by Saffran,
Johnson, Aslin, and Newport (1999). We propose a validation
loop that follows the experimental setup used with human
subjects, in order to characterize the networks' ability to
learn the statistical regularities of tone sequences.
Tone-sequence encodings based on pitch class, pitch-class intervals,
and melodic contour are considered and compared. The
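As an illustration of the three encodings (a sketch, not the paper's implementation), the functions below derive each representation from a list of MIDI note numbers: pitch class is absolute, while pitch-class intervals and melodic contour are relative to the preceding note.

```python
def pitch_classes(midi_notes):
    """Absolute encoding: pitch class (0-11) of each note."""
    return [n % 12 for n in midi_notes]

def pc_intervals(midi_notes):
    """Relative encoding: pitch-class interval (mod 12) between
    successive notes."""
    return [(b - a) % 12 for a, b in zip(midi_notes, midi_notes[1:])]

def contour(midi_notes):
    """Coarsest relative encoding: direction of each melodic step
    (+1 up, 0 repeat, -1 down)."""
    return [(b > a) - (b < a) for a, b in zip(midi_notes, midi_notes[1:])]

melody = [60, 64, 67, 64, 60]  # C4 E4 G4 E4 C4
```

Note that the two relative encodings yield one fewer symbol than the number of notes, since each symbol describes a transition between adjacent events.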
experimental setup is extended by introducing a pre-exposure
forced-choice task, which makes it possible to detect an initial
bias in the model population prior to exposure. Two distinct
models (i.e., a Simple Recurrent Network and a Feedforward Network
with a time window of one event) lead to similar results.
We obtain the most consistent learning behavior using an encoding
based on pitch classes, which is not a relative representation.
More importantly, our simulations and additional
behavioral experiments highlight the impact of tone sequence
encoding in both initial model bias and post-exposure discrimination
accuracy. Furthermore, we suggest that melodic encoding
and representation should be further investigated when inspecting
and modeling behavioral experiments involving musical material.