Sebastian Raschka, 2015
Training a model for movie review classification¶
In [1]:
import numpy as np
from nltk.stem.porter import PorterStemmer
import re
from nltk.corpus import stopwords
stop = stopwords.words('english')
porter = PorterStemmer()
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')
text = [w for w in text.split() if w not in stop]
tokenized = [porter.stem(w) for w in text]
return tokenized
def stream_docs(path):
with open(path, 'r') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
In [2]:
next(stream_docs(path='../../data/movie_data.csv'))
Out[2]:
('"In 1974, the teenager Martha Moxley (Maggie Grace) moves to the high-class area of Belle Haven, Greenwich, Connecticut. On the Mischief Night, eve of Halloween, she was murdered in the backyard of her house and her murder remained unsolved. Twenty-two years later, the writer Mark Fuhrman (Christopher Meloni), who is a former LA detective that has fallen in disgrace for perjury in O.J. Simpson trial and moved to Idaho, decides to investigate the case with his partner Stephen Weeks (Andrew Mitchell) with the purpose of writing a book. The locals squirm and do not welcome them, but with the support of the retired detective Steve Carroll (Robert Forster) that was in charge of the investigation in the 70\'s, they discover the criminal and a net of power and money to cover the murder.<br /><br />""Murder in Greenwich"" is a good TV movie, with the true story of a murder of a fifteen years old girl that was committed by a wealthy teenager whose mother was a Kennedy. The powerful and rich family used their influence to cover the murder for more than twenty years. However, a snoopy detective and convicted perjurer in disgrace was able to disclose how the hideous crime was committed. The screenplay shows the investigation of Mark and the last days of Martha in parallel, but there is a lack of the emotion in the dramatization. My vote is seven.<br /><br />Title (Brazil): Not Available"',
1)
In [3]:
def get_minibatch(doc_stream, size):
docs, y = [], []
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
return docs, y
In [4]:
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='../../data/movie_data.csv')
In [5]:
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
0% 100% [##############################] | ETA[sec]: 0.000 Total time elapsed: 163.780 sec
In [6]:
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
Accuracy: 0.873
In [7]:
clf = clf.partial_fit(X_test, y_test)
Serializing fitted scikit-learn estimators¶
After we trained the logistic regression model as shown above, we know save the classifier along woth the stop words, Porter Stemmer, and HashingVectorizer as serialized objects to our local disk so that we can use the fitted classifier in our web application later.
In [8]:
import pickle
import os
dest = os.path.join('movieclassifier', 'pkl_objects')
if not os.path.exists(dest):
os.mkdir(dest)
pickle.dump(stop, open(os.path.join(dest, 'stopwords.pkl'), 'wb'))
pickle.dump(porter, open(os.path.join(dest, 'porterstemmer.pkl'), 'wb'))
pickle.dump(vect, open(os.path.join(dest, 'vectorizer.pkl'), 'wb'))
pickle.dump(clf, open(os.path.join(dest, 'classifier.pkl'), 'wb'))
After executing the preceeding code cells, we can now restart the IPython notebook kernel to check if the objects were serialized correctly.
In [1]:
import pickle
import re
import os
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')
text = [w for w in text.split() if w not in stop]
tokenized = [porter.stem(w) for w in text]
return text
dest = os.path.join('movieclassifier', 'pkl_objects')
stop = pickle.load(open(os.path.join(dest, 'stopwords.pkl'), 'rb'))
porter = pickle.load(open(os.path.join(dest, 'porterstemmer.pkl'), 'rb'))
vect = pickle.load(open(os.path.join(dest, 'vectorizer.pkl'), 'rb'))
clf = pickle.load(open(os.path.join(dest, 'classifier.pkl'), 'rb'))
In [22]:
import numpy as np
label = {0:'negative', 1:'positive'}
example = ['I love this movie']
X = vect.transform(example)
print('Prediction: %s\nProbability: %.2f%%' %\
(label[clf.predict(X)[0]], np.max(clf.predict_proba(X))*100))
Prediction: positive Probability: 91.56%
Creating an SQLite Database¶
In [17]:
import sqlite3
import os
dest = os.path.join('movieclassifier', 'reviews.sqlite')
conn = sqlite3.connect(dest)
c = conn.cursor()
c.execute('CREATE TABLE review_db (review TEXT, sentiment INTEGER, date TEXT)')
example1 = 'I love this movie'
c.execute("INSERT INTO review_db (review, sentiment, date) VALUES (?, ?, DATETIME('now'))", (example1, 1))
example2 = 'I disliked this movie'
c.execute("INSERT INTO review_db (review, sentiment, date) VALUES (?, ?, DATETIME('now'))", (example2, 0))
conn.commit()
conn.close()
In [18]:
conn = sqlite3.connect(dest)
c = conn.cursor()
c.execute("SELECT * FROM review_db WHERE date BETWEEN '2015-01-01 10:10:10' AND DATETIME('now')")
results = c.fetchall()
conn.close()
In [19]:
print(results)
[('I love this movie', 1, '2015-06-02 16:02:12'), ('I disliked this movie', 0, '2015-06-02 16:02:12')]