We study and compare two approaches to the task of automatically assigning predefined classes to clinical free-text narratives. The first approach treats this as a traditional mention-level named-entity recognition task, while the second treats it as a sentence-level multi-label classification task. The two approaches are compared through sentence-level evaluation, with state-of-the-art methods evaluated for each. The experiments are conducted on two data sets of Finnish clinical text, manually annotated with respect to the topics pain and acute confusion. Our results suggest that the mention-level named-entity recognition approach outperforms sentence-level classification overall, although the latter still achieves the best prediction scores on several annotation classes.