
MATE Deliverable D1.1

Supported Coding Schemes

4. Dialogue Acts

Responsibility: Marion Klein, Claudia Soria



Introduction
Scheme Comparison
Conclusion


 

Introduction

Dialogue acts, also called dialogue moves or illocutionary acts, are regarded as the basic elements of human communication, rather than words or sentences. A dialogue is divided into units called turns, which are delimited by speaker changes. A turn in turn consists of one or more utterances, which are also called segments.

Dialogue act annotation schemes are used to mark important characteristics of utterances. These annotations indicate the role of an utterance in a specific dialogue and make the relationship between utterances more obvious.

Most dialogue act schemes nowadays are task-oriented, as we will see later in this report. This is done to reduce the number of annotation tags to a manageable size and to increase the analysis rate of the NLP system in which the scheme is used. The information content (or the semantics) of task-oriented dialogues can basically be split into task/domain-related information and information that addresses the communication process itself. To guarantee generality, and therefore more flexibility, both information levels should be kept separate in the choice of tags. Schemes which cover only these two levels are said to support rather shallow analysis.

DAMSL can be mentioned as an example of a scheme that allows deep analysis. With its forward and backward looking functions it keeps track of how an utterance constrains the future beliefs and actions of the participants and affects the discourse, and of how an utterance relates to the previous discourse, respectively.
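
To illustrate the idea, the sketch below (plain Python data, not the actual DAMSL mark-up format) shows how one utterance can be tagged on both dimensions at once. The tag names Info-Request, Statement, Answer and Accept are taken from the DAMSL column of the comparison tables later in this chapter; the data structure itself is purely hypothetical.

    # Illustration only: a hypothetical representation of DAMSL-style
    # annotation, not the official DAMSL mark-up format.
    # Each utterance may carry a forward looking function (how it constrains
    # what can follow) and a backward looking function (how it relates to
    # earlier utterances).
    dialogue = [
        {"id": 1, "speaker": "A", "text": "Can we meet on Tuesday?",
         "forward": "Info-Request", "backward": None},
        {"id": 2, "speaker": "B", "text": "Tuesday is fine.",
         "forward": "Statement",
         "backward": {"function": ["Answer", "Accept"], "antecedent": 1}},
    ]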
 

Scheme Comparison

First of all, the schemes which have been examined are listed below, together with information about their developers and the domains in which they are used. Further details about the schemes are given in the Annexes. Not all of these schemes are suitable for consideration in the MATE project. This might be because they are designed for a special task/domain and are hence too restricted.

Another reason might be that they are not widely used, which could, for example, suggest that they are too complicated to apply.

However, in the following some criteria are detailed which can be used to rate the observed schemes.

The results of the guidelines, together with the criteria outlined above, are applied to the observed schemes in the tables below; a short sketch of how agreement figures such as the kappa values cited under Evaluation can be computed is given after the tables:
 
 
Schemes | ALPARON | CHAT | CHIBA | COCONUT
Coding Book | yes | yes | yes | yes
Annotators: Number | 3 | huge | 10 | 2
Annotators: Expertise | experts | experts | experts | experts
Annotated dialogues: Size | 500 dialogues | 160MB | 22 dialogues | 16 dialogues
Annotated dialogues: Languages | Dutch | many | Japanese | English
Participants | 2 | 2 | 2 | 2
Task Orientation | TD | (NTD) | TD | TD
Application Orientation | yes | no | no | yes
Domain Restriction | DES | no | DIR, BA, TR | FUR
Activity Type | IE | CH | CN, PS | CN
Human / Machine Part. | HH, MM | HH, NMM | HH, NMM (?) | HH, MM (computer)
Evaluation | yes (77% agreement) | no | yes (0.57 < alpha < 0.68) | yes
Mark-up Language | yes, own | yes, own | yes, SGML-like | yes, Nb
Annotation Tools | yes, OVR coder | yes | yes, modification of dat | yes, Nb
Usability | yes | no | ? | yes

 
 
 
Schemes | CONDON & CECH | C-STAR | DAMSL | FLAMMIA
Coding Book | yes | yes | yes | yes
Annotators: Number | 5 | 5 | 4 | 7
Annotators: Expertise | fairly experts | experts | experts | trained
Annotated dialogues: Size | 88 dialogues | 230 dialogues | 18 dialogues | 25 dialogues
Annotated dialogues: Languages | English | Engl., Jap., Kor., It. | English | English
Participants | 2 | 2 | 2 | 2
Task Orientation | TD | TD | NTD | TD
Application Orientation | yes | yes | no | yes
Domain Restriction | TS | TR | no | DES
Activity Type | CN | CN | several | IE
Human / Machine Part. | HH, MM, NMM | HH | HH | HH, MM
Evaluation | yes (91% agreement) | no | yes, k = 0.56 | yes, k = 0.6+
Mark-up Language | yes, Nb's | yes | yes, DAMSL | yes
Annotation Tools | yes, Nb | no | yes, dat | yes
Usability | yes | yes | yes | ?

 
 
 
 
Schemes | JANUS | LINLIN | MAPTASK | NAKATANI
Coding Book | yes | yes | yes | yes
Annotators: Number | 4 | 4 | 6 |
Annotators: Expertise | experts | experts | experts | naive
Annotated dialogues: Size | many | 140 dialogues | 128 dialogues | 72 dialogues
Annotated dialogues: Languages | English | Swedish | English | English
Participants | 2 | 2 | 2 | 1
Task Orientation | TD | TD | TD | TD
Application Orientation | yes | yes | yes | no
Domain Restriction | BA | TR/TS | DIR | INSTR
Activity Type | CN | IE | PS | TI
Human / Machine Part. | HH | HM, NS | HH, NMM | HH, NMM
Evaluation | yes (89% agreement) | yes (97% agreement) | yes, k = 0.83 | no
Mark-up Language | yes, own | yes, Nb's | yes, own SGML based | yes, Nb's
Annotation Tools | no | yes, Nb | yes, own | yes, Nb
Usability | yes | yes | yes | yes

 
 
 
 
Schemes | SLSA | SWBD-DAMSL | TRAUM | VERBMOBIL
Coding Book | yes | yes | yes | yes
Annotators: Number | 7 | 9 | 3 | 3
Annotators: Expertise | experts | experts | experts | naive
Annotated dialogues: Size | 100 dialogues | 1155 dialogues | 36 dialogues | 1172 dialogues
Annotated dialogues: Languages | Swedish | English | English | Eng., Jap., Ger.
Participants | 2 (?) | 2 | 2 | 2
Task Orientation | TD | NTD | NTD | TD
Application Orientation | yes | no | yes | yes
Domain Restriction | COU | no | no | BA
Activity Type | several | several | CN | CN
Human / Machine Part. | HH, NMM | HH, MM | HH, NM | HH, NMM
Evaluation | yes (not published) | yes, 0.8 < k < 0.84 | yes (not published) | yes, k = 0.84
Mark-up Language | yes, own | yes, variant of DAMSL | yes, Nb's | yes, own
Annotation Tools | yes, TRACTOR | no | yes, Nb | yes, AnnoTag
Usability | yes | yes | yes | yes
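
Several of the Evaluation entries above report inter-annotator agreement as a kappa value (e.g. k = 0.83 for Maptask or k = 0.84 for Verbmobil). As a reminder of what such a figure means, the following minimal sketch computes Cohen's kappa for two annotators who have tagged the same utterances; the tag names and the data are invented for illustration only.

    # Minimal sketch: Cohen's kappa for two annotators labelling the same
    # utterances with dialogue act tags. Tags and data are hypothetical.
    from collections import Counter

    def cohen_kappa(labels_a, labels_b):
        n = len(labels_a)
        # Observed agreement: proportion of utterances given the same tag.
        p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        # Expected (chance) agreement from each annotator's tag distribution.
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        p_expected = sum(freq_a[t] / n * freq_b[t] / n for t in freq_a)
        return (p_observed - p_expected) / (1 - p_expected)

    coder1 = ["suggest", "accept", "inform", "reject", "inform", "greet"]
    coder2 = ["suggest", "accept", "inform", "accept", "inform", "greet"]
    print(round(cohen_kappa(coder1, coder2), 2))   # prints 0.79

A kappa of 1 would mean perfect agreement and 0 would mean agreement no better than chance; the plain percent-agreement figures quoted for some schemes do not correct for chance in this way.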

In order to develop a standard it is necessary to compare the schemes with regard to their underlying tasks and dialogue acts. The following tables group the schemes by domain and show the equivalences between their dialogue acts. Entries in italics represent higher-order expressions which cannot themselves be annotated.
 


Domain: information retrieval

Alparon | Flammia's | LinLin
Moves (Dialogue Acts) | Speech Acts | Initiative
Statement | - | Update
Question, Check, Alignment | Question-Confirm | Question
- | Response | Response
Clarification | - | Answer
- | Confirm, Accept, Reject | -
Acknowledgement, Reconfirmation | Acknowledge, Repeat | -
Greeting, Bye | - | Discourse Management, Opening, Ending, Continuation
Pause | - | -
Other | - | -

Domain: route direction

Chiba | Maptask
Initiation | Initiating moves
Inform, Other assertion | Explain
Yes-no question, Wh-question | Query-yn, Query-w, Check, Align
Request, Suggest, Persuasion, Propose, Demand | Instruct
Promise | -
Response | Response moves
Positive, Negative, Answer, Other response | Reply-y, Reply-n, Reply-w, Clarify
Hold, Confirm | -
- | Acknowledge
Follow-up, Understanding | -
Conventional, Opening, Closing | -
Other initiation | -

Domain: appointment scheduling

Chiba | Verbmobil
Initiation | Dialogue_Act
Inform, Other assertion | Inform, Init, Give-reason, Digress, Deviate_Szenario, Refer_to_setting
Yes-no-question, Wh-question | -
Request, Suggest, Persuasion, Propose, Demand | Suggest, Request, Request_Suggest, Request_Clarify, Request_Comment
Promise | -
Response | -
Positive, Negative, Answer, Other response | Feedback, Feedback_Positive, Feedback_Negative, Feedback_Backchanneling
Hold, Confirm | Accept, Confirm, Reject, Explained_Reject, Clarify, Clarify_Answer
Follow-up, Understanding | -
Conventional, Opening, Closing | Convention, Thank, Deliberate, Introduce, Politeness_Formula, Greeting, Greeting_Begin, Greeting_End
Other initiation | Not_Classifiable

Domain: general

DAMSL | SWBD-DAMSL | Traum's | Chat
Forward looking function | Forward Communicative Function | Illocutionary Function | Categories of Illocutionary Force
Statement, Assert, Reassert, Other | Statement, Statement-no-opinion, Statement-opinion | Inform, Supp-Inf, Supp-Sug | Statement: AC, CN, DW, ST, WS; Declarations: DC, DP
Info-Request | Influencing-Addressee-Future-Action (1), Yes-No-Question, Wh-Question, Or-Clause, Declarative-Yes-No-Question, Declarative-Wh-Question, Tag-Question, Backchannel-in-Question, Rhetorical-Question | YNO, WHQ | Questions: AQ, AA, AN, EQ, NA, QA, QN, RA, SA, TA, TQ, YQ, RQ
Influencing-Addressee-Future-Action, Action-directive, Open-Option | Influencing-Addressee-Future-Action (2), Open-Question, Action-Directive | Request, Suggest | Directives (1): RP, RQ
Committing-Speaker-Future-Action, Offer, Commit, Explicit-performative, Exclamation | Committing-Speaker-Future-Action, Offers, Options, Commits | Offer | Commitments: FP, PF, SI, TD; Directives (2): CL, SS
- | - | Promise | PD
Backward looking function | Backwards-Communicative-Function | - | -
Answer | Answer, Yes Answer, No Answer, Affirmative non-yes-answer, Negative non-no answer, Other answer, Dispreferred answers | Eval | Evaluations: AB, CR, DS, ED, ET, PM; Directives (3): AC
Agreement, Accept, Accept-part, Maybe, Reject, Reject-part, Hold | Agreement, Agree/Accept, Maybe / Accept-part, Reject, Hold before answer/agreement | Accept, Reject, Check | Directives (4): AD, AL, CS, RD, GI, GR, DR; Declarations (2): ND, YD
Understanding | Understanding | Grounding | -
- | - | RequestAck | -
Signal-understanding, Acknowledge, Repeat-rephrase, Completion | Response-Acknowledgement, Repeat-phrase, Collaborative-Completion, Acknowledge, Summarize/Re-formulate, Appreciation, Downplayer | Acknowledge | Speech Elicitations: CX, EA, EI, EC, EX, RT, SC
Signal-Non-Understanding | Signal-non-understanding | Request-Repair | Demands for clarification: RR
Correct-Misspeaking | - | Repair | Text editing: CT
- | Other-forward-function, Conventional-opening, Conventional-closing, Thanking, Apology | Greet, Apologise | -
- | - | - | -
- | Other, Quotation, Hedge | - | Vocalisation: YY, OO
- | - | - | Markings: CM, EM, EN, ES, MK, TO, XA
- | - | - | Performances: PR, TX


 


Conclusion

The large number of coding schemes detailed in the Annexes shows the current research interest in dialogue act annotation. There also seems to be a trend towards shallow, task-oriented annotation, as such schemes outnumber those which take a general approach. The comparison of the dialogue acts of schemes with equivalent domains reflects the expected similarities. More surprisingly, even a comparison of dialogue acts across all schemes, regardless of their orientation, shows a great deal of parallelism, although the general schemes are, of course, more comprehensive.

To be taken into consideration for MATE, a scheme should have a coding book, it should be widely used, and it should have good evaluation results. Also, a scheme that is not tied to a special task seems more appropriate than a task-related and therefore possibly restricted one. Looking at the general comparison of schemes above, one can observe that all listed schemes provide a coding book. Among the most widely used schemes are Alparon, Chat, SWBD-DAMSL and Verbmobil. Unfortunately Chat has not been evaluated, but the evaluation results of Alparon, SWBD-DAMSL and Verbmobil are judged to be good. As SWBD-DAMSL is the only one of these schemes which is not task-related, it should definitely be supported in MATE. With regard to the MATE standard, of course, the dialogue acts of all schemes should be taken into account and analysed.

Last Modification: 26.8.1998 by Marion Klein