Two-Layer Mutually Reinforced Random Walk

for Improved Multi-Party Meeting Summarization

Yun-Nung (Vivian) Chen and Florian Metze

1. Summary

Idea:

o Important utterances are topically similar to each other

o Utterances similar to important speakers' utterances should be more important

Approach for extractive summarization:

o Construct a two-layer graph representing 1) utterance nodes in the utterance-layer and 2) speaker nodes in the speaker-layer

o Mutually propagate importance scores via within-layer edges and between-layer edges

• Basic idea: high importance means

 utterances with a higher original score

 utterances topically/lexically similar to indicative utterances

 utterances similar to important speakers' utterances

[Flowchart of the proposed approach: a multi-party meeting corpus (ASR or manual transcripts) is scored with Latent Topic Entropy via a topic model (e.g., PLSA, LDA), giving the baseline ranking (baseline part); the Mutually Reinforced Random Walk then re-ranks the utterances (re-rank part), and the top-ranked utterances form the summary; performance is measured by F-measure.]
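The baseline named in the flowchart scores utterances by Latent Topic Entropy (LTE). Below is a hedged sketch of one common reading of that step: fit a topic model, take each utterance's topic distribution, and score by inverse entropy so topically focused utterances rank higher. Using scikit-learn's LDA and the inverse-entropy scoring are our assumptions, not details given on the poster.

```python
# Hedged sketch of an LTE-style baseline (assumed formulation, not the paper's code).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def lte_scores(utterances, n_topics=32):
    """Score each utterance by the inverse of its latent topic entropy."""
    counts = CountVectorizer().fit_transform(utterances)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    theta = lda.fit_transform(counts)  # rows: P(topic | utterance), each sums to 1
    entropy = -np.sum(theta * np.log(theta + 1e-12), axis=1)
    return 1.0 / (entropy + 1e-12)     # lower entropy -> more focused -> higher score
```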


2. Graph Construction

• Speaker-Layer

o Node: speakers in a document (all utterances from the same speaker are combined into one speaker node)

o Edge weight (red, green): TF-IDF cosine similarity



[Figure: the two-layer graph, with utterance nodes U1–U7 in the utterance-layer and speaker nodes S1–S3 in the speaker-layer.]

• Utterance-Layer

o Node: utterances in a document

o Edge weight (blue): topical/lexical similarity

 Utterances topically similar to more important utterances should be more important

 Utterances from similar speakers can partially share importance

3. Two-Layer Mutually Reinforced Random Walk

• Similarity Matrix

o L_UU: utterance-to-utterance relation (topical/lexical similarity)

o L_SS: speaker-to-speaker relation (TF-IDF cosine similarity)

o L_US: utterance-to-speaker relation (TF-IDF cosine similarity)

o L_SU: speaker-to-utterance relation (TF-IDF cosine similarity)
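To make the four matrices concrete, here is a minimal sketch, assuming each utterance is a string and each speaker node concatenates that speaker's utterances. TF-IDF cosine similarity is used throughout; the topical variant of L_UU would substitute topic-distribution similarity (e.g., from PLSA/LDA). The row normalization and all names are illustrative, not from the paper's code.

```python
# Minimal sketch of the four similarity matrices (names are illustrative).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similarity_matrices(utterances, speaker_utts):
    """utterances: list[str]; speaker_utts: dict mapping speaker -> list[str]."""
    speaker_docs = [" ".join(utts) for utts in speaker_utts.values()]
    X = TfidfVectorizer().fit_transform(utterances + speaker_docs)  # shared vocabulary
    U, S = X[: len(utterances)], X[len(utterances):]

    L_UU = cosine_similarity(U, U)  # utterance-to-utterance (lexical variant)
    L_SS = cosine_similarity(S, S)  # speaker-to-speaker
    L_US = cosine_similarity(U, S)  # utterance-to-speaker
    L_SU = L_US.T.copy()            # speaker-to-utterance

    # Row-normalize each matrix so the random walk propagates probability mass.
    def row_norm(M):
        return M / np.maximum(M.sum(axis=1, keepdims=True), 1e-12)

    return row_norm(L_UU), row_norm(L_SS), row_norm(L_US), row_norm(L_SU)
```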

• Two-Layer MRRW-BP (Between-Layer Propagation)

v_U^(t+1) = (1 − α) · r_U + α · L_US · v_S^(t)

where v_U^(t+1) is the utterance score at the (t+1)-th iteration, r_U is the original importance of the utterances, and L_US · v_S^(t) gives the scores propagated from the speaker-layer.

v_S^(t+1) = (1 − α) · r_S + α · L_SU · v_U^(t)

where v_S^(t+1) is the speaker score at the (t+1)-th iteration, r_S assigns equal weight to each speaker, and L_SU · v_U^(t) gives the scores propagated from the utterance-layer.

[Figure: between-layer propagation over the two-layer graph (U1–U7; S1–S3).]
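A minimal sketch of the MRRW-BP iteration above, assuming the row-normalized matrices from the previous sketch and α = 0.9 as in the experiments. Treating the speaker prior r_S as uniform ("equal weight") is our reading of the poster.

```python
# Minimal sketch of between-layer propagation (MRRW-BP); not the paper's code.
import numpy as np

def mrrw_bp(r_U, L_US, L_SU, alpha=0.9, iters=100):
    """r_U: original utterance importance (e.g., LTE scores), normalized to sum to 1."""
    n_speakers = L_SU.shape[0]
    r_S = np.full(n_speakers, 1.0 / n_speakers)  # equal weight per speaker (assumption)
    v_U, v_S = r_U.copy(), r_S.copy()
    for _ in range(iters):
        v_U_next = (1 - alpha) * r_U + alpha * L_US @ v_S  # scores from speaker-layer
        v_S_next = (1 - alpha) * r_S + alpha * L_SU @ v_U  # scores from utterance-layer
        v_U, v_S = v_U_next, v_S_next
    return v_U, v_S
```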

• Two-Layer MRRW-WBP (Within- and Between-Layer Propagation)

v_U^(t+1) = (1 − α) · r_U + α · L_UU · L_US · v_S^(t)

where L_UU · L_US · v_S^(t) gives the scores propagated from the speaker-layer and then propagated within the utterance-layer.

v_S^(t+1) = (1 − α) · r_S + α · L_SS · L_SU · v_U^(t)

where L_SS · L_SU · v_U^(t) gives the scores propagated from the utterance-layer and then propagated within the speaker-layer.

[Figure: within- and between-layer propagation over the two-layer graph (U1–U7; S1–S3).]

 With MRRW-BP, utterance node U can get a higher score when

 it has higher original importance

 more speaker nodes are similar to utterance U

 With MRRW-WBP, utterance node U can get a higher score when

 it has higher original importance

 more speaker nodes are similar to utterance U

 more important utterances are similar to utterance U
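For comparison with the MRRW-BP sketch above, here is the WBP variant under the same assumptions; the only change is that scores arriving from the other layer are additionally diffused through the receiving layer's own edges (L_UU, L_SS), which is what lets important similar utterances reinforce each other.

```python
# Minimal sketch of within- and between-layer propagation (MRRW-WBP).
import numpy as np

def mrrw_wbp(r_U, L_UU, L_SS, L_US, L_SU, alpha=0.9, iters=100):
    n_speakers = L_SU.shape[0]
    r_S = np.full(n_speakers, 1.0 / n_speakers)  # equal weight per speaker (assumption)
    v_U, v_S = r_U.copy(), r_S.copy()
    for _ in range(iters):
        # propagate from the speaker-layer, then within the utterance-layer
        v_U_next = (1 - alpha) * r_U + alpha * L_UU @ (L_US @ v_S)
        # propagate from the utterance-layer, then within the speaker-layer
        v_S_next = (1 - alpha) * r_S + alpha * L_SS @ (L_SU @ v_U)
        v_U, v_S = v_U_next, v_S_next
    return v_U, v_S
```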

5. Experiments

• Dataset: 10 meetings from the CMU Speech Group; #Speakers: 6 in total, 2–4 per meeting; WER = 44%

• Parameter settings: α = 0.9, summary ratio = 30%

[Figure: ROUGE-1 and ROUGE-L F-measures on ASR and manual transcripts, comparing the Baseline (LTE), Random Walk (LexSim), Random Walk (TopicSim), Two-Layer MRRW-BP, Two-Layer MRRW-WBP (LexSim), and Two-Layer MRRW-WBP (TopicSim).]
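To make the 30% summary ratio concrete, here is a minimal sketch of the final extraction step once re-ranked utterance scores are available. Applying the ratio to the transcript's word count, rather than to the number of utterances, is an assumption; the poster does not specify the unit.

```python
# Minimal sketch of fixed-ratio extractive summary selection (assumed reading).
def extract_summary(utterances, scores, ratio=0.3):
    """utterances: list[str]; scores: per-utterance importance, same length."""
    budget = ratio * sum(len(u.split()) for u in utterances)
    ranked = sorted(range(len(utterances)), key=lambda i: -scores[i])
    chosen, used = [], 0.0
    for i in ranked:
        length = len(utterances[i].split())
        if used + length <= budget:
            chosen.append(i)
            used += length
    return [utterances[i] for i in sorted(chosen)]  # keep original utterance order
```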

6. Conclusions

• Graph-based approaches improve speech summarization performance

• Two-layer approaches that incorporate speaker information yield further improvement

• Topical similarity is more robust to recognition errors

 better for ASR transcripts

• Lexical similarity is more accurate in the absence of errors

 better for manual transcripts

• The proposed approaches achieve more than a 7% relative improvement over the baseline
如果今天我們想要使用 CSMA/CD 作為台灣超級網路的 MAC protocol ,並使用 1000 BaseT Ethernet 作為 Link layer 的協定.. ,此網路包含了台北、新竹、台中、台南、高雄、台東、花 蓮等城市