HMM and Viterbi Algorithm (machine that writes like a WSJ journalist!)

The training and testing files can be found in the WSJ database.

The source code (an IPython notebook) can be found at this link: HMMonWSJ

In this post we discuss how we train an HMM to write like a WSJ journalist!

Here we train a first-order HMM for POS tagging on a set of training data, estimating its probabilities from counts. We then use the trained model to automatically assign tags to the testing data. Finally, we use the provided ground-truth tag assignments for the testing data to evaluate and report the model's performance.

From the training data, we need to estimate both emission probabilities and transition probabilities.

  • Emission probabilities: The emission probabilities are calculated as P(w|t) = \frac{Count(w,t)}{Count(t)}, where w stands for the word and t stands for the tag.
  • Transition probabilities: The tag set contains 45 tags. We implement a function that returns the tag bigram probabilities P(t_i|t_{i-1}) = \frac{Count(t_{i-1}, t_i)}{Count(t_{i-1})}. A better approach would apply a smoothing technique, but we will not discuss that in this post.
  • To calculate the transition probabilities (i.e., the bigrams), for each sentence we add two tags to represent the sentence boundaries, i.e. t_{-1} = START and t_{n+1} = END (a small worked example follows this list).
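For example, suppose a (hypothetical) training sentence is tagged as The/DT dog/NN barks/VBZ. After padding with the boundary tags, the tag sequence is START, DT, NN, VBZ, END, which contributes the emission counts Count(The, DT), Count(dog, NN), Count(barks, VBZ) and the transition counts Count(START, DT), Count(DT, NN), Count(NN, VBZ), Count(VBZ, END). If, say, DT occurs 100 times in the training data and is followed by NN in 60 of those occurrences, then P(NN|DT) = \frac{60}{100} = 0.6.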

The code that does the counting is as follows (note that we store Count(w,t) in word_table, Count(t) in tag_table, and the bigram counts Count(t_{i-1}, t_i) in tag_transition):

tag_table = {}        # Count(t): how many times each tag occurs
word_table = {}       # Count(w,t): word -> {tag: count}
tag_transition = {}   # Count(t_{i-1}, t_i): previous tag -> {next tag: count}

with open('./wsj1-18.training', 'r') as f:
    for line in f:                 # each line is a new sentence
        current_tag = 'START'
        old_tag = 'START'
        current_word = ''
        tag_table['START'] = tag_table.get('START', 0) + 1

        tag_flag = 0               # tokens alternate: word, tag, word, tag, ...
        for word in line.split():
            if tag_flag == 0:      # this token is a word
                current_word = word
                tag_flag = 1
            elif tag_flag == 1:    # this token is a tag
                current_tag = word
                tag_table[current_tag] = tag_table.get(current_tag, 0) + 1

                # update Count(w,t)
                if current_word not in word_table:
                    word_table[current_word] = {}
                word_table[current_word][current_tag] = \
                    word_table[current_word].get(current_tag, 0) + 1

                # update Count(t_{i-1}, t_i)
                if old_tag not in tag_transition:
                    tag_transition[old_tag] = {}
                tag_transition[old_tag][current_tag] = \
                    tag_transition[old_tag].get(current_tag, 0) + 1

                old_tag = current_tag
                tag_flag = 0

        # close the sentence with the END boundary tag
        current_tag = 'END'
        tag_table['END'] = tag_table.get('END', 0) + 1
        if old_tag not in tag_transition:
            tag_transition[old_tag] = {}
        tag_transition[old_tag][current_tag] = \
            tag_transition[old_tag].get(current_tag, 0) + 1
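To make the three tables concrete, here is a toy illustration (the sentence is hypothetical; the alternating word/tag format per line is what the loop above assumes):

# Hypothetical training line in the alternating word/tag format:
#
#   The DT dog NN barks VBZ
#
# After running the counting loop on just this line we would have:
#
#   tag_table      == {'START': 1, 'DT': 1, 'NN': 1, 'VBZ': 1, 'END': 1}
#   word_table     == {'The': {'DT': 1}, 'dog': {'NN': 1}, 'barks': {'VBZ': 1}}
#   tag_transition == {'START': {'DT': 1}, 'DT': {'NN': 1},
#                      'NN': {'VBZ': 1}, 'VBZ': {'END': 1}}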

Now there are uncommon words in the training set. Words occurring fewer than five times in the training data are mapped to the special word token UNKA. The following code does this job:

word_table['UNKA'] = {}
remove_word_list = []
for current_tag in tag_table:
    word_table['UNKA'][current_tag] = 0

for current_word, current_tag_list in word_table.items():
    if current_word == 'UNKA':          # don't let UNKA absorb (or remove) itself
        continue
    total = 0
    for current_tag, c_word_tag in current_tag_list.items():
        total += c_word_tag
    if total < 5:                       # we have an uncommon word
        for current_tag, c_word_tag in current_tag_list.items():
            word_table['UNKA'][current_tag] += c_word_tag
        remove_word_list.append(current_word)

for current_word in remove_word_list:
    word_table.pop(current_word, None)
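After this step, the rare words are no longer keys in word_table, so any lookup at tagging time has to fall back to UNKA. A minimal sketch of that fallback (a hypothetical helper; the Viterbi code below applies the same check inline):

def emission_counts(word, table):
    # rare words were merged into UNKA above, and genuinely unseen test
    # words never had an entry, so both fall back to the UNKA row
    return table[word] if word in table else table['UNKA']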

Afterwards we can calculate the emission and transition probabilities with ease:

import copy

Pwt = copy.deepcopy(word_table)  # keep the raw counts in word_table unchanged

for current_word, current_tag_list in word_table.items():
    for current_tag, c_word_tag in current_tag_list.items():  # c_ stands for count
        c_tag = tag_table[current_tag]
        Pwt[current_word][current_tag] = float(c_word_tag) / float(c_tag)

Pt_transition = copy.deepcopy(tag_transition)

for old_tag, next_tag_list in tag_transition.items():
    for next_tag, c_old_current in next_tag_list.items():
        c_tag = tag_table[old_tag]
        Pt_transition[old_tag][next_tag] = float(c_old_current) / float(c_tag)
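As a quick sanity check (not in the original notebook), the transition probabilities out of each tag should sum to one, because every occurrence of a tag other than END is followed by exactly one next tag:

for old_tag, next_tag_list in Pt_transition.items():
    total = sum(next_tag_list.values())
    assert abs(total - 1.0) < 1e-6, (old_tag, total)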

With the transition and emission probabilities calculated, we can use the Viterbi algorithm to find the most likely tag sequence for each sentence (note that Viterbi performs the decoding, i.e. the tagging; the training itself is just the counting above). We will explain the Viterbi algorithm in future updates, but for now this is the code:

def viterbi(line, pwt, pt_transition, all_tag):
    # line is the list of words w_1 .. w_i seen so far;
    # returns result[tag] = [likelihood of the best path ending in tag, [path]]
    result = {}
    current_word = line[-1]
    if current_word not in pwt:            # unseen/rare words fall back to UNKA
        current_word = 'UNKA'

    if len(line) == 1:                     # base case: first word of the sentence
        for current_tag in all_tag:
            a_ij = pt_transition['START'].get(current_tag, 0)
            b_j = pwt[current_word].get(current_tag, 0)
            likelihood = b_j * a_ij
            path = ['START', current_tag]
            result[current_tag] = [likelihood, path]

    else:                                  # recursive case: extend the best paths
        v_i_result = viterbi(line[0:-1], pwt, pt_transition, all_tag)
        for current_tag in all_tag:
            likelihood = 0
            path = []
            b_j = pwt[current_word].get(current_tag, 0)

            for previous_tag in all_tag:
                if previous_tag == 'END':
                    continue
                a_ij = pt_transition.get(previous_tag, {}).get(current_tag, 0)
                candidate = b_j * v_i_result[previous_tag][0] * a_ij
                if candidate > likelihood:
                    likelihood = candidate
                    path = v_i_result[previous_tag][1] + [current_tag]
            result[current_tag] = [likelihood, path]

    return result
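A usage sketch (assuming all_tag is the list of 45 tags plus START and END, and that the sentence below is a hypothetical, already tokenized test sentence):

sentence = ['The', 'dog', 'barks']   # hypothetical tokenized test sentence
result = viterbi(sentence, Pwt, Pt_transition, all_tag)

# pick the final tag whose best path has the highest likelihood
best_tag = max(result, key=lambda t: result[t][0])
best_path = result[best_tag][1]      # e.g. ['START', 'DT', 'NN', 'VBZ']
print(best_path[1:])                 # drop the START boundary tag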

Thus we have our HMM trained and ready to be tested!

The full testing code can be found in the linked notebook.

In short, we tag every test sentence and accumulate the confusion matrix C[true_label, recognized_label].
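A minimal sketch of that evaluation loop (the test file name and its alternating word/tag format are assumptions here; the linked notebook has the full version):

import numpy as np

tag_index = {t: i for i, t in enumerate(all_tag)}
confusion = np.zeros((len(all_tag), len(all_tag)), dtype=int)
correct = 0
total = 0

with open('./wsj.testing', 'r') as f:              # hypothetical test file name
    for line in f:
        tokens = line.split()
        words, true_tags = tokens[0::2], tokens[1::2]
        if not words:
            continue
        result = viterbi(words, Pwt, Pt_transition, all_tag)
        best_tag = max(result, key=lambda t: result[t][0])
        predicted_tags = result[best_tag][1][1:]   # drop the START tag
        for t_true, t_pred in zip(true_tags, predicted_tags):
            confusion[tag_index[t_true], tag_index[t_pred]] += 1
            correct += (t_true == t_pred)
            total += 1

print('accuracy:', float(correct) / total)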

Visualization of the confusion matrix is as follows:

Confusion matrix for 47 labels (45 tags + START + END)

The model produces pretty good results. Testing is performed on 5927 sentences.

 

 
