In: IEEE International Conference on Smart Computing. International Conference on Smart Computing (SMARTCOMP 2019), June 12-14, Washington, DC, USA, IEEE, 2019.
Abstract
Monitoring of human activities is an essential capability of many smart systems, and much progress has been made in recent years. One of the key remaining challenges is the availability of labeled training data, particularly given the degree of variability in human activities. A possible solution is to leverage large-scale online data repositories. This has previously been attempted with image and sound data, as both microphones and cameras are widely used sensing modalities. In this paper, we describe a first step towards the use of online, text-based activity descriptions to support general sensor-based activity recognition systems. The idea is to extract semantic information from online texts about the way complex activities are composed of simple ones that have to be performed (e.g., a manual for assembling a piece of furniture) and to use such a semantic description in conjunction with sensor-based, statistical classifiers of basic actions to recognize the complex activities and compose them into semantic trees.
Extraction of domain-relevant information, evaluated on 11 text-based manuals from different domains, reached an average recall of 77% and a precision of 88%. The structural error rate in the construction of the respective trees was around 1%.
@inproceedings{pub10474,
author = {Krupp, Lars and Grünerbl, Agnes and Bahle, Gernot and Lukowicz, Paul},
title = {Towards Automatic Semantic Models by Extraction of Relevant Information from Online Text},
booktitle = {IEEE International Conference on Smart Computing. International Conference on Smart Computing (SMARTCOMP 2019), June 12-14, Washington, DC, USA},
year = {2019},
organization = {IEEE},
publisher = {IEEE}
}
German Research Center for Artificial Intelligence (DFKI)