Learning from examples to count domains in one-dimensional patterns is studied. Increasing the number of examples used for training a network to perform the task is equivalent to the annealing of a one-dimensional Ising model. The generalization error falls off exponentially with the number of examples per weight. The related contiguity problem, where the network discriminates between patterns with small and large numbers of domains, exhibits a first-order phase transition to perfect generalization at all temperatures. Monte Carlo simulations of both models are in very good agreement with the theoretical predictions.
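The counting task itself can be made concrete with a minimal sketch: representing a one-dimensional pattern as a sequence of ±1 values (as in an Ising configuration), a domain is a maximal run of equal values, so the domain count is one plus the number of domain walls. The function name and the ±1 encoding here are illustrative assumptions, not notation from the paper:

```python
def count_domains(pattern):
    """Count contiguous runs (domains) of equal values in a 1-D +/-1 pattern.

    Illustrative sketch: a domain wall sits between every pair of
    neighboring sites with opposite signs, and the number of domains
    is the number of walls plus one.
    """
    domains = 1
    for prev, cur in zip(pattern, pattern[1:]):
        if cur != prev:  # a domain wall separates two runs
            domains += 1
    return domains

print(count_domains([1, 1, -1, -1, -1, 1]))  # three domains: ++, ---, +
```

In the contiguity problem described above, a network trained on labeled examples would have to separate patterns for which this count is small from those for which it is large.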