Predict the Pitch | Baseball Research

PROBLEM STATEMENT:

To predict the pitch type, given various measurements of the pitch.

BACKGROUND:

This dataset was obtained from a national baseball team as part of a predictive modeling challenge. The objective was to predict pitch_type from various other parameters such as start speed, height, and angles. It is a classification problem, and I used Random Forest to build the model.

Full disclosure: I knew nothing about baseball before working on this project! Before this, "pitch" meant the cricket pitch to me. I'm a cricket lover, like just about any Indian out there, so I wondered why anyone would want to predict the pitch type when the pitch curators have carefully prepared the pitch and we would merely be analyzing whether it is spin-friendly or whether the fast bowlers would like it.

But apparently the meaning changes drastically from one sport to another, and pitch prediction in baseball involves some very interesting data analytics. Now, let's get to the interesting part.

KNOWN VARIABLES:

  • First Name of the Pitcher
  • Last Name of the Pitcher
  • Mlb_id
  • ab_id
  • pitch_id
  • start_speed
  • x0
  • z0
  • px
  • pz
  • pfx_x
  • pfx_z
  • stand
  • inning
  • height
  • count
  • spinrateND
  • spindirND
  • vxf
  • vzf
  • xangle
  • zangle

The meaning of these variables can be found here.

TARGET VARIABLE:

  • pitch_type

FEATURE ENGINEERING:

The following engineered variables proved useful, as each one increased the prediction accuracy; a small helper that builds them is sketched after this list.

  • speed           : start_speed * vxf * vzf
  • stspeed_z0      : start_speed * z0
  • stspeed_x0      : start_speed * x0
  • stspeed_ht      : start_speed * height
  • stspped_z0.x0   : start_speed * z0 * x0
  • z0.x0           : z0 * x0
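
Since the same feature construction is repeated for every model and again on the test set, it can be wrapped in a small helper. This is only a sketch; the add_features() name is hypothetical and does not appear in the code below.

add_features <- function(df) {
  # Interaction features built from the raw measurement columns
  df$speed         <- df$start_speed * df$vxf * df$vzf
  df$stspeed_z0    <- df$start_speed * df$z0
  df$stspeed_x0    <- df$start_speed * df$x0
  df$stspeed_ht    <- df$start_speed * df$height
  df$stspped_z0.x0 <- df$start_speed * df$z0 * df$x0  # name kept exactly as in the code below
  df$z0.x0         <- df$z0 * df$x0
  df$vxf <- NULL  # the raw velocity components are dropped once speed is built
  df$vzf <- NULL
  df
}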

DATA SET SPLIT FOR TRAINING AND TESTING:

The training dataset was split in a 70:30 ratio, with 70% used for training and 30% held out for testing; the predicted results were then evaluated against the actual values.
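
The split is stratified on pitch_type via caret's createDataPartition, so both halves keep the same class proportions. In brief (the full code appears below):

set.seed(1000)  # seed added here for a reproducible split
train.index <- createDataPartition(train.data$pitch_type, p = 0.7, list = FALSE)
train <- train.data[ train.index, ]
test  <- train.data[-train.index, ]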

NO. OF MODELS BUILT AND TESTED:

Machine Learning Algorithm used: RANDOM FOREST

Programming Language used to build the model: R

12 models were built and tested in total, and the top three models were as follows:

Model 9:  OOB (out-of-bag) error rate: 20.87%

Model 11: OOB (out-of-bag) error rate: 21.04%

Model 12: OOB (out-of-bag) error rate: 21.14%

I used 10-fold cross-validation to test the top two models, Model 9 and Model 11, again.

After 10-fold cross-validation:

Accuracy of Model 9:  79.33%

Accuracy of Model 11: 78.66%

Hence Model 9 turned out to be the best model; it was retrained on 100% of the training dataset and used to predict pitch_type in the testing dataset.
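
The repeated cross-validation is set up with caret's createMultiFolds and trainControl, exactly as in the full code below. In brief:

set.seed(1000)
cv.folds <- createMultiFolds(y.label, k = 10, times = 10)  # 10 folds x 10 repeats = 100 resamples
ctrl     <- trainControl(method = "repeatedcv", number = 10,
                         repeats = 10, index = cv.folds)
rf.cv    <- train(x = rf.train.9, y = y.label, method = "rf",
                  tuneLength = 2, ntree = 500, trControl = ctrl)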

MOST IMPORTANT VARIABLES:

MOST IMPORTANT VARIABLE: spindirND

The values below show the magnitude by which the model would suffer if the given variable were removed, which indicates its importance. The engineered feature variables are marked with an asterisk (*), underscoring the value of feature engineering.

Variable            Importance
spindirND              100.000
start_speed             88.197
pfx_x                   84.467
pfx_z                   81.521
zangle                  58.332
stspeed_ht *            55.609
spinrateND              49.470
xangle                  35.636
speed *                 33.435
stspeed_z0 *            33.082
z0                      20.685
pz                      20.360
px                      19.347
count                   18.982
stspeed_x0 *            18.300
stspped_z0.x0 *         17.064
x0                      16.876
z0.x0 *                 16.743
height                  12.408
inning                   9.932
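
Scaled importances like these (top variable = 100) are what caret's varImp() reports; a minimal sketch, assuming the fitted model.9.cv1.10 object from the code below:

imp <- varImp(model.9.cv1.10, scale = TRUE)  # rescales importances so the top variable = 100
imp
plot(imp, top = 20)                          # dot plot of the 20 most important variables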

VISUALIZATIONS INDICATING VARIABLE IMPORTANCE AND PREDICTIVE POWER:

The predictive power of a variable is high when there is a discernible separation between the regions occupied by the different pitch_types: the more distinct and separately clustered they are, the greater the variable's predictive power.
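
For example, colouring a scatter of two variables by pitch_type makes this separation easy to eyeball. A minimal sketch, assuming the train.9full data frame assembled in the visualization code at the end (the alpha setting is my addition, to reduce overplotting):

ggplot(train.9full, aes(spindirND, start_speed, colour = pitch_type)) +
  geom_point(alpha = 0.4)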

Predictive power of the variable spindirND: VERY HIGH

Reason: the clusters of the different pitch_types occupy distinctly different regions along the X-axis and are clearly separated from each other, which gives the variable in question (here spindirND) immense predictive power.

[Figure: distribution of spindirND by pitch_type]

Predictive power of the variable start_speed: HIGH

Reason: the clusters of the different pitch_types occupy distinctly different regions along the X-axis, creating a separation between pitch_types and increasing the predictive power of the variable in question (here start_speed).

[Figure: distribution of start_speed by pitch_type]

Clustering of pitch_type when spindirND is plotted against start_speed:

The clear distinction between the different pitch_types suggests that the pair spindirND and start_speed has very strong combined predictive power.

[Figure: scatter plot of start_speed vs spindirND, coloured by pitch_type]

Clustering of pitch_type when spindirND is plotted against pfx_x:

The distinction between the different pitch_types suggests that spindirND and pfx_x together have good predictive power, though not as good as spindirND and start_speed.

[Figure: scatter plot of pfx_x vs spindirND, coloured by pitch_type]

Tableau visualization of the important variables vs. a variable that provides little insight (e.g., height):

[Figure: Tableau dashboard comparing the important variables with height]

The R code for model building, model selection, and cross-validation is as follows:


library(caret)
library(randomForest)
library(doSNOW)
library(e1071)
library(ggplot2)
library(directlabels)


#Get the data
train.data <- read.csv("train.csv")
test.data <- read.csv("test.csv")

#Explore the data
str(train.data)
levels(train.data$pitch_type)

# Drop the identifier columns; they carry no predictive signal
train.data$last <- NULL
train.data$first <- NULL
train.data$mlbid <- NULL
train.data$ab_id <- NULL
train.data$pitch_id <- NULL

test.data$last <- NULL
test.data$first <- NULL
test.data$mlbid <- NULL
test.data$ab_id <- NULL
test.data$pitch_id <- NULL


set.seed(1000)  # seed the partition so the split is reproducible
train.index <- createDataPartition(train.data$pitch_type, p = .7, list = FALSE)
train <- train.data[ train.index,]
test <- train.data[-train.index,]

# Model 1: baseline random forest on all raw features
rf.train.1 <- train
y.label <- train$pitch_type
rf.train.1$pitch_type <- NULL
set.seed(1000)
rf.1 <- randomForest(x = rf.train.1, y = y.label, importance = TRUE, ntree = 500)
rf.1
varImpPlot(rf.1)
#OOB estimate of error rate: 21.72%

# Model 2: drop start_speed, vxf and vzf in favour of speed = start_speed * (vxf + vzf)
rf.train.2 <- train
rf.train.2$pitch_type <- NULL
rf.train.2$speed <- rf.train.2$start_speed * (rf.train.2$vxf + rf.train.2$vzf)
rf.train.2$start_speed <- NULL
rf.train.2$vxf <- NULL
rf.train.2$vzf <- NULL

set.seed(1000)
rf.2 <- randomForest(x = rf.train.2, y = y.label, importance = TRUE, ntree = 500)
rf.2
varImpPlot(rf.2)
#OOB estimate of error rate: 26.36%

# Model 3: keep start_speed; add speed = start_speed * (vxf + vzf)
rf.train.3 <- train
rf.train.3$pitch_type <- NULL
rf.train.3$speed <- rf.train.3$start_speed * (rf.train.3$vxf + rf.train.3$vzf)
rf.train.3$vxf <- NULL
rf.train.3$vzf <- NULL

set.seed(1000)
rf.3 <- randomForest(x = rf.train.3, y = y.label, importance = TRUE, ntree = 500)
rf.3
varImpPlot(rf.3)
#OOB estimate of error rate: 21.78%

# Model 4: switch to the product form, speed = start_speed * vxf * vzf
rf.train.4 <- train
rf.train.4$pitch_type <- NULL
rf.train.4$speed <- rf.train.4$start_speed * rf.train.4$vxf * rf.train.4$vzf
rf.train.4$vxf <- NULL
rf.train.4$vzf <- NULL

set.seed(1000)
rf.4 <- randomForest(x = rf.train.4, y = y.label, importance = TRUE, ntree = 500)
rf.4
varImpPlot(rf.4)
#OOB estimate of error rate: 21.63%

# Model 5: Model 4 plus stspeed_z0
rf.train.5 <- train
rf.train.5$pitch_type <- NULL
rf.train.5$speed <- rf.train.5$start_speed * rf.train.5$vxf * rf.train.5$vzf
rf.train.5$vxf <- NULL
rf.train.5$vzf <- NULL
rf.train.5$stspeed_z0 <- rf.train.5$start_speed * rf.train.5$z0


set.seed(1000)
rf.5 <- randomForest(x = rf.train.5, y = y.label, importance = TRUE, ntree = 500)
rf.5
varImpPlot(rf.5)
# OOB estimate of error rate: 21.55%

# Model 6: Model 5 plus stspeed_x0
rf.train.6 <- train
rf.train.6$pitch_type <- NULL

rf.train.6$speed <- rf.train.6$start_speed * rf.train.6$vxf * rf.train.6$vzf
rf.train.6$vxf <- NULL
rf.train.6$vzf <- NULL
rf.train.6$stspeed_z0 <- rf.train.6$start_speed * rf.train.6$z0
rf.train.6$stspeed_x0 <- rf.train.6$start_speed * rf.train.6$x0


set.seed(1000)
rf.6 <- randomForest(x = rf.train.6, y = y.label, importance = TRUE, ntree = 500)
rf.6
varImpPlot(rf.6)

# OOB estimate of error rate: 21.37%

# Model 7: Model 6 plus stspeed_ht
rf.train.7 <- train
rf.train.7$pitch_type <- NULL

rf.train.7$speed <- rf.train.7$start_speed * rf.train.7$vxf * rf.train.7$vzf
rf.train.7$vxf <- NULL
rf.train.7$vzf <- NULL
rf.train.7$stspeed_z0 <- rf.train.7$start_speed * rf.train.7$z0
rf.train.7$stspeed_x0 <- rf.train.7$start_speed * rf.train.7$x0
rf.train.7$stspeed_ht <- rf.train.7$start_speed * rf.train.7$height

set.seed(1000)
rf.7 <- randomForest(x = rf.train.7, y = y.label, importance = TRUE, ntree = 500)
rf.7
varImpPlot(rf.7)

# OOB estimate of error rate: 21.24%

# Model 8: Model 7 plus stspped_z0.x0
rf.train.8 <- train
rf.train.8$pitch_type <- NULL

rf.train.8$speed <- rf.train.8$start_speed * rf.train.8$vxf * rf.train.8$vzf
rf.train.8$vxf <- NULL
rf.train.8$vzf <- NULL

rf.train.8$stspeed_z0 <- rf.train.8$start_speed * rf.train.8$z0
rf.train.8$stspeed_x0 <- rf.train.8$start_speed * rf.train.8$x0
rf.train.8$stspeed_ht <- rf.train.8$start_speed * rf.train.8$height
rf.train.8$stspped_z0.x0 <- rf.train.8$start_speed * rf.train.8$z0 * rf.train.8$x0

set.seed(1000)
rf.8 <- randomForest(x = rf.train.8, y = y.label, importance = TRUE, ntree = 500)
rf.8
varImpPlot(rf.8)

# OOB estimate of error rate: 21.18%

# Model 9: Model 8 plus z0.x0
rf.train.9 <- train
rf.train.9$pitch_type <- NULL

rf.train.9$speed <- rf.train.9$start_speed * rf.train.9$vxf * rf.train.9$vzf
rf.train.9$vxf <- NULL
rf.train.9$vzf <- NULL

rf.train.9$stspeed_z0 <- rf.train.9$start_speed * rf.train.9$z0
rf.train.9$stspeed_x0 <- rf.train.9$start_speed * rf.train.9$x0
rf.train.9$stspeed_ht <- rf.train.9$start_speed * rf.train.9$height
rf.train.9$stspped_z0.x0 <- rf.train.9$start_speed * rf.train.9$z0 * rf.train.9$x0
rf.train.9$z0.x0 <- rf.train.9$z0 * rf.train.9$x0

set.seed(1000)
rf.9 <- randomForest(x = rf.train.9, y = y.label, importance = TRUE, ntree = 500)
rf.9
varImpPlot(rf.9)

# OOB estimate of error rate: 20.87%

# Model 10: Model 9 plus pfx_z.zangle
rf.train.10 <- train
rf.train.10$pitch_type <- NULL

rf.train.10$speed <- rf.train.10$start_speed * rf.train.10$vxf * rf.train.10$vzf
rf.train.10$pfx_z.zangle <- rf.train.10$pfx_z * rf.train.10$zangle

rf.train.10$vxf <- NULL
rf.train.10$vzf <- NULL

rf.train.10$stspeed_z0 <- rf.train.10$start_speed * rf.train.10$z0
rf.train.10$stspeed_x0 <- rf.train.10$start_speed * rf.train.10$x0
rf.train.10$stspeed_ht <- rf.train.10$start_speed * rf.train.10$height
rf.train.10$stspped_z0.x0 <- rf.train.10$start_speed * rf.train.10$z0 * rf.train.10$x0
rf.train.10$z0.x0 <- rf.train.10$z0 * rf.train.10$x0


set.seed(1000)
rf.10 <- randomForest(x = rf.train.10, y = y.label, importance = TRUE, ntree = 500)
rf.10
varImpPlot(rf.10)

# OOB estimate of error rate: 21.16%

# Model 11: Model 9 plus px.pz
rf.train.11 <- train
rf.train.11$pitch_type <- NULL

rf.train.11$speed <- rf.train.11$start_speed * rf.train.11$vxf * rf.train.11$vzf
rf.train.11$px.pz <- rf.train.11$px * rf.train.11$pz

rf.train.11$vxf <- NULL
rf.train.11$vzf <- NULL

rf.train.11$stspeed_z0 <- rf.train.11$start_speed * rf.train.11$z0
rf.train.11$stspeed_x0 <- rf.train.11$start_speed * rf.train.11$x0
rf.train.11$stspeed_ht <- rf.train.11$start_speed * rf.train.11$height
rf.train.11$stspped_z0.x0 <- rf.train.11$start_speed * rf.train.11$z0 * rf.train.11$x0
rf.train.11$z0.x0 <- rf.train.11$z0 * rf.train.11$x0


set.seed(1000)
rf.11 <- randomForest(x = rf.train.11, y = y.label, importance = TRUE, ntree = 500)
rf.11
varImpPlot(rf.11)

# OOB estimate of error rate: 21.04%

# Model 12: Model 9 plus stspeed.inning
rf.train.12 <- train
rf.train.12$pitch_type <- NULL

rf.train.12$speed <- rf.train.12$start_speed * rf.train.12$vxf * rf.train.12$vzf
rf.train.12$vxf <- NULL
rf.train.12$vzf <- NULL

rf.train.12$stspeed_z0 <- rf.train.12$start_speed * rf.train.12$z0
rf.train.12$stspeed_x0 <- rf.train.12$start_speed * rf.train.12$x0
rf.train.12$stspeed_ht <- rf.train.12$start_speed * rf.train.12$height
rf.train.12$stspped_z0.x0 <- rf.train.12$start_speed * rf.train.12$z0 * rf.train.12$x0
rf.train.12$z0.x0 <- rf.train.12$z0 * rf.train.12$x0
rf.train.12$stspeed.inning <- rf.train.12$start_speed * rf.train.12$inning

set.seed(1000)
rf.12 <- randomForest(x = rf.train.12, y = y.label, importance = TRUE, ntree = 500)
rf.12
varImpPlot(rf.12)

# OOB estimate of error rate: 21.14%


test.label <- test$pitch_type

#Predict on the remaining test set using top 3 models

#Model 9: best OOB : 20.87 %

test.9 <- test
test.9$pitch_type <- NULL

test.9$speed <- test.9$start_speed * test.9$vxf * test.9$vzf
test.9$vxf <- NULL
test.9$vzf <- NULL

test.9$stspeed_z0 <- test.9$start_speed * test.9$z0
test.9$stspeed_x0 <- test.9$start_speed * test.9$x0
test.9$stspeed_ht <- test.9$start_speed * test.9$height
test.9$stspped_z0.x0 <- test.9$start_speed * test.9$z0 * test.9$x0
test.9$z0.x0 <- test.9$z0 * test.9$x0

pred9 <- predict(rf.9, newdata = test.9)
tab9 <- table(pred9, test.label)
confusionMatrix(tab9)
#Accuracy : 0.7914, Kappa : 0.7306

#Model 11: second best OOB : 21.04 %

test.11 <- test
test.11$pitch_type <- NULL

test.11$speed <- test.11$start_speed * test.11$vxf * test.11$vzf
test.11$px.pz <- test.11$px * test.11$pz

test.11$vxf <- NULL
test.11$vzf <- NULL

test.11$stspeed_z0 <- test.11$start_speed * test.11$z0
test.11$stspeed_x0 <- test.11$start_speed * test.11$x0
test.11$stspeed_ht <- test.11$start_speed * test.11$height
test.11$stspped_z0.x0 <- test.11$start_speed * test.11$z0 * test.11$x0
test.11$z0.x0 <- test.11$z0 * test.11$x0

pred11 <- predict(rf.11, newdata = test.11)
tab11 <- table(pred11, test.label)
confusionMatrix(tab11)
#Accuracy : 0.7914, Kappa : 0.7307

#Model 12: third best OOB : 21.14 %

test.12 <- test
test.12$pitch_type <- NULL

test.12$speed <- test.12$start_speed * test.12$vxf * test.12$vzf
test.12$vxf <- NULL
test.12$vzf <- NULL

test.12$stspeed_z0 <- test.12$start_speed * test.12$z0
test.12$stspeed_x0 <- test.12$start_speed * test.12$x0
test.12$stspeed_ht <- test.12$start_speed * test.12$height
test.12$stspped_z0.x0 <- test.12$start_speed * test.12$z0 * test.12$x0
test.12$z0.x0 <- test.12$z0 * test.12$x0
test.12$stspeed.inning <- test.12$start_speed * test.12$inning

pred12 <- predict(rf.12, newdata = test.12)
tab12 <- table(pred12, test.label)
confusionMatrix(tab12)

#Accuracy : 0.7871, Kappa : 0.725

#Model 9 and Model 11 look the best and are very close in accuracy

#Cross Validation of model 9



#10 fold cross validation repeated 10 times

#Leverage caret to create 100 folds

set.seed(1000)
cv.10.folds <- createMultiFolds(y.label, k = 10, times = 10)

#Set up Caret's trainControl object
ctrl.1 <- trainControl(method = "repeatedcv", number = 10, repeats = 10, index = cv.10.folds)

#Set doSNOW package for multi-core training.
cl <- makeCluster(2, type = "SOCK")
registerDoSNOW(cl)

#Set seed for reproducibility and train

set.seed(1000)
rf.9.cv1.10 <- train(x = rf.train.9, y = y.label, method = "rf", tuneLength = 2, ntree = 500, trControl = ctrl.1)

#Shutdown cluster
stopCluster(cl)

rf.9.cv1.10

# Accuracy - 0.7850699, Kappa - 0.7220195

pred9.cv <- predict(rf.9.cv1.10, newdata = test.9)
tab9.cv <- table(pred9.cv, test.label)
confusionMatrix(tab9.cv)

# Accuracy : 0.7933 Kappa : 0.7325

#Cross Validation of model 11
#Set doSNOW package for multi-core training.
cl <- makeCluster(2, type = "SOCK")
registerDoSNOW(cl)

#Set seed for reproducibility and train

set.seed(1000)
rf.11.cv2.10 <- train(x = rf.train.11, y = y.label, method = "rf", tuneLength = 2, ntree = 500, trControl = ctrl.1)

#Shutdown cluster
stopCluster(cl)

rf.11.cv2.10

# Accuracy - 0.7836957, Kappa - 0.7200600

pred11.cv <- predict(rf.11.cv2.10, newdata = test.11)
tab11.cv <- table(pred11.cv, test.label)
confusionMatrix(tab11.cv)

# Accuracy : 0.7866 Kappa : 0.7235

##### Model 9 performs the best during cross validation

#Now train the entire training dataset on Model 9 using 10 fold cross validation
# Feature Engineering on whole training dataset

train.label <- train.data$pitch_type

train.9 <- train.data
train.9$pitch_type <- NULL

train.9$speed <- train.9$start_speed * train.9$vxf * train.9$vzf
train.9$vxf <- NULL
train.9$vzf <- NULL

train.9$stspeed_z0 <- train.9$start_speed * train.9$z0
train.9$stspeed_x0 <- train.9$start_speed * train.9$x0
train.9$stspeed_ht <- train.9$start_speed * train.9$height
train.9$stspped_z0.x0 <- train.9$start_speed * train.9$z0 * train.9$x0
train.9$z0.x0 <- train.9$z0 * train.9$x0

set.seed(1000)
train.9.cv.10.folds <- createMultiFolds(train.label, k = 10, times = 10)

#Set up Caret's trainControl object
ctrl.2 <- trainControl(method = "repeatedcv", number = 10, repeats = 10, index = train.9.cv.10.folds)

cl <- makeCluster(2, type = "SOCK")
registerDoSNOW(cl)

#Set seed for reproducibility and train

set.seed(1000)
model.9.cv1.10 <- train(x = train.9, y = train.label, method = "rf", tuneLength = 2, ntree = 500, trControl = ctrl.2)

#Shutdown cluster
stopCluster(cl)

model.9.cv1.10
varImp(model.9.cv1.10)


#Final Model Accuracy : 0.7920948, Kappa : 0.7314408

#Predict the pitch type in our final test data

test.9 <- test.data
test.9$speed <- test.9$start_speed * test.9$vxf * test.9$vzf
test.9$vxf <- NULL
test.9$vzf <- NULL
test.9$stspeed_z0 <- test.9$start_speed * test.9$z0
test.9$stspeed_x0 <- test.9$start_speed * test.9$x0
test.9$stspeed_ht <- test.9$start_speed * test.9$height
test.9$stspped_z0.x0 <- test.9$start_speed * test.9$z0 * test.9$x0
test.9$z0.x0 <- test.9$z0 * test.9$x0


#apply the best model to test set

#Make predictions
model9.cv.preds <- predict(model.9.cv1.10, newdata = test.9)
table(model9.cv.preds)

#Write a csv file to submit in Kaggle

final.test <- read.csv("test.csv")
final.test$pitch_type <- model9.cv.preds
write.csv(final.test, file = "Pitch_type_Predictions.csv", row.names = FALSE)

#Visualizations
train.9full <- train.9
train.9full$pitch_type <- train.label



ggplot(train.9full, aes(start_speed, z0, color = pitch_type)) +
  geom_point()

ggplot(train.9full, aes(start_speed, z0)) +
  geom_point() +
  facet_wrap(~pitch_type)

ggplot(train.9full, aes(start_speed, colour = pitch_type)) +
  geom_freqpoly(binwidth = 0.5)

ggplot(train.9full, aes(start_speed, fill = pitch_type)) +
  geom_histogram(binwidth = 0.5) +
  facet_wrap(~pitch_type, ncol = 2)

ggplot(train.9full, aes(spindirND, fill = pitch_type)) +
  geom_histogram(binwidth = 0.5) +
  facet_wrap(~pitch_type, ncol = 2)

ggplot(train.9full, aes(z0, colour = pitch_type)) +
  geom_freqpoly(binwidth = 0.5)

ggplot(train.9full, aes(z0, fill = pitch_type)) +
  geom_histogram(binwidth = 0.5) +
  facet_wrap(~pitch_type, ncol = 1)

ggplot(train.9full, aes(speed, colour = pitch_type)) +
  geom_freqpoly(binwidth = 0.5)

ggplot(train.9full, aes(speed, fill = pitch_type)) +
  geom_histogram(binwidth = 0.5) +
  facet_wrap(~pitch_type, ncol = 1)

ggplot(train.9full, aes(stspeed_z0, colour = pitch_type)) +
  geom_freqpoly(binwidth = 0.5)

ggplot(train.9full, aes(stspeed_z0, fill = pitch_type)) +
  geom_histogram(binwidth = 0.5) +
  facet_wrap(~pitch_type, ncol = 1)

ggplot(train.9full, aes(stspeed_x0, colour = pitch_type)) +
  geom_freqpoly(binwidth = 0.5)

ggplot(train.9full, aes(stspeed_x0, fill = pitch_type)) +
  geom_histogram(binwidth = 0.5) +
  facet_wrap(~pitch_type, ncol = 1)

ggplot(train.9full, aes(stspeed_ht, colour = pitch_type)) +
  geom_freqpoly(binwidth = 0.5)

ggplot(train.9full, aes(stspeed_ht, fill = pitch_type)) +
  geom_histogram(binwidth = 0.5) +
  facet_wrap(~pitch_type, ncol = 1)

ggplot(train.9full, aes(stspped_z0.x0, colour = pitch_type)) +
  geom_freqpoly(binwidth = 0.5)

ggplot(train.9full, aes(stspped_z0.x0, fill = pitch_type)) +
  geom_histogram(binwidth = 0.5) +
  facet_wrap(~pitch_type, ncol = 1)

ggplot(train.9full, aes(z0.x0, colour = pitch_type)) +
  geom_freqpoly(binwidth = 0.5)

ggplot(train.9full, aes(z0.x0, fill = pitch_type)) +
  geom_histogram(binwidth = 0.5) +
  facet_wrap(~pitch_type, ncol = 1)

ggplot(train.9full, aes(spindirND, colour = pitch_type)) +
  geom_freqpoly(binwidth = 0.5)

ggplot(train.9full, aes(spindirND, fill = pitch_type)) +
  geom_histogram(binwidth = 0.5) +
  facet_wrap(~pitch_type, ncol = 1)

ggplot(train.9full, aes(spindirND, start_speed, colour = pitch_type)) +
  geom_point()

ggplot(train.9full, aes(spindirND, start_speed, colour = pitch_type)) +
  geom_point(show.legend = FALSE) +
  directlabels::geom_dl(aes(label = pitch_type), method = "smart.grid")

ggplot(train.9full, aes(spindirND, pfx_x, colour = pitch_type)) +
  geom_point(show.legend = FALSE) +
  directlabels::geom_dl(aes(label = pitch_type), method = "smart.grid")
