More conceptual
  1. Wahlster W (1998) Intelligent user interfaces: an introduction. In: Maybury MT, Wahlster W (eds) Readings in Intelligent User Interfaces
  2. Lieberman H (2009) User interface goals, AI opportunities. AI Mag 30(4):16–22
  3. Horvitz E (1999) Uncertainty, action, and interaction: In pursuit of mixed-initiative computing. IEEE Intell Syst 14:17–20
  4. Maybury MT, Stock O, Wahlster W (2006) Intelligent interactive entertainment grand challenges. IEEE Intell Syst 21(5):14–18
  5. Rich C, Sidner CL, Lesh N (2001) Collagen: applying collaborative discourse theory to human-computer interaction. AI Mag 22(4):15–26
  6. Oviatt S (1999) Ten myths of multimodal interaction. Commun ACM 42(11):74–81 
  7. Sonntag D (2012) Collaborative multimodality. KI - German Journal on Artificial Intelligence 26(2):161–168
  8. McGuinness DL (2004) Question answering on the Semantic Web. IEEE Intell Syst 19(1):82–85
  9. Lieberman H, Liu H, Singh P, Barry B (2004) Beating common sense into interactive applications. AI Mag 25(4):63–76
  10. Horvitz E, Kadie C, Paek T, Hovel D (2003) Models of attention in computing and communication: from principles to applications. Commun ACM 46(3):52–59
More technical
  1. Sarwar B, Karypis G, Konstan J, Riedl J (2001) Item-based collaborative filtering recommendation algorithms. In: Proc WWW 2001, ACM, pp 285–295
  2. Gajos KZ, Wobbrock JO, Weld DS (2008) Improving the performance of motor-impaired users with automatically-generated, ability-based interfaces. In: Proceeding of SIGCHI (CHI ’08). ACM, New York, pp 1257–1266 (Video)
  3. Prasov Z, Chai JY (2008) What's in a gaze?: the role of eye-gaze in reference resolution in multimodal conversational interfaces. In: Proc IUI ’08, ACM, New York, pp 20–29
  4. Parikh D, Kovashka A, Parkash A, Grauman K (2012) Relative attributes for enhanced human-machine communication. In: Proc AAAI Conference on Artificial Intelligence
  5. Toyama T, Kieninger T, Shafait F, Dengel A (2012) Gaze guided object recognition using a head-mounted eye tracker. In: Proc ETRA ’12, ACM, New York, pp 91–98
  6. Orlosky J, Kiyokawa K, Takemura H (2013) Dynamic text management for see-through wearable and heads-up display systems. In: Proc IUI ’13, ACM, New York, pp 363–370
  7. Sonntag D, Weber M, Hammon M, Cavallaro A (forthcoming) Integrating pens in breast imaging for instant knowledge acquisition (extended version). AI Magazine, 2013
  8. Mahmud J, Zhou M, Megiddo N, Nichols J, Drews C (2013) Recommending targeted strangers from whom to solicit information on social media. In: Proc IUI ’13, ACM
  9. Liu Q, Liao C, Wilcox L, Dunnigan A, Liew B (2010) Embedded media markers: marks on paper that signify associated media. In: Proc IUI ’10, ACM
  10. Fader A, Soderland S, Etzioni O (2011) Identifying relations for open information extraction. In: Proc EMNLP ’11, pp 1535–1545. (The NLP paper, backend technology)
  11. Morency LP, Sidner C, Lee C, Darrell T (2007) Head gestures for perceptual interfaces: the role of context in improving recognition. Artif Intell 171(8-9):568–585. (The ML paper)
  12. Dinakar K, Jones B, Havasi C, Lieberman H, Picard R (2012) Common sense reasoning for detection, prevention and mitigation of cyberbullying. ACM Trans Interact Intell Syst (TiiS) 2(3):1–30
  13. Horvitz E, Breese J, Heckerman D, Hovel D, Rommelse K (1998) The Lumière project: Bayesian user modeling for inferring the goals and needs of software users. In: Proc UAI ’98, Morgan Kaufmann, San Francisco, CA, pp 256–265