I recently received my Bachelor's degree from National Taiwan University (NTU). At NTU, I worked with Prof. Chih-Jen Lin on large-scale optimization and meta-learning algorithms. Given my background, I believe that good optimization techniques for solving convex models quickly are key to achieving high accuracy in many applications, because they let us worry less about model training and focus on the data itself.

Noticing that the machine learning community lacks software for many kinds of algorithms that could be beneficial in daily applications, I am now developing tools that bundle various state-of-the-art algorithms which perform well on many data sets and contests. One of these tools is lib-GUNDAM.

Methods I tried

I entered the competition on Jan 30, 2011. At that time, I made a submission just to check whether lib-GUNDAM performs well on data sets other than my experimental ones. I ended up 10th with that first submission. I only expanded the categorical features, trained a linear SVM, and submitted its decision values on the data set.
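That first pipeline can be sketched roughly as follows. This is a minimal illustration using scikit-learn rather than lib-GUNDAM, and the toy data is invented for the example:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import LinearSVC

# Hypothetical toy data: two categorical columns per instance.
X_cat = np.array([["a", "x"], ["b", "y"], ["a", "y"], ["b", "x"]])
y = np.array([1, 0, 1, 0])

# Expand each categorical feature into binary indicator columns.
X = OneHotEncoder().fit_transform(X_cat).toarray()

# Train a linear SVM and use its decision values as the submission scores.
svm = LinearSVC().fit(X, y)
scores = svm.decision_function(X)
```

Submitting raw decision values (rather than hard 0/1 predictions) preserves a ranking over instances, which is what AUC-style leaderboard metrics reward.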

Then, I used lib-GUNDAM to scale the data and performed a parameter search for the linear SVM on the training set. Shortly afterwards, I rose to about 7th place.
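Scaling plus parameter search might look like this sketch, with scikit-learn standing in for lib-GUNDAM's tools; the synthetic data and the power-of-two C grid are my assumptions, not the author's exact setup:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5)) * [1, 10, 100, 1, 5]  # features on very different scales
y = (X[:, 0] + X[:, 1] / 10 > 0).astype(int)

# Scale each feature to [0, 1] so no single feature dominates the margin.
X_scaled = MinMaxScaler().fit_transform(X)

# Cross-validated grid search over the regularization parameter C.
search = GridSearchCV(LinearSVC(), {"C": [2.0 ** k for k in range(-5, 6)]}, cv=3)
search.fit(X_scaled, y)
best_C = search.best_params_["C"]
```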

I somehow got stuck there, so I decided to investigate other classifiers. I tried the tree bagger in Matlab, but it ran too slowly on the data and I did not have time to wait for it. I also noticed that L2-regularized logistic regression (LR) performs better than L1-SVM on this data set in most cases, so I switched to it afterwards. Fortunately, my friend Hsiang-Fu Yu had studied fast training of L2 LR some months earlier and incorporated his method into LIBLINEAR. Using that software, I could conduct parameter searches very quickly (searching over the penalty weights for positive and negative instances).
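scikit-learn's LogisticRegression with solver="liblinear" wraps LIBLINEAR, and its class_weight argument plays the role of the per-class penalty weights searched here. A hedged sketch of that search, with an invented imbalanced data set and weight grids of my own choosing:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0.8).astype(int)  # imbalanced labels

# Grid search over penalty weights for positive and negative instances.
best = None
for w_pos in [1, 2, 4, 8]:
    for w_neg in [1, 2, 4]:
        clf = LogisticRegression(penalty="l2", solver="liblinear",
                                 class_weight={1: w_pos, 0: w_neg})
        auc = cross_val_score(clf, X, y, cv=3, scoring="roc_auc").mean()
        if best is None or auc > best[0]:
            best = (auc, w_pos, w_neg)
```

Because L2 LR trains in seconds with LIBLINEAR-style solvers, exhaustively sweeping a small weight grid like this is cheap.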

I also found that explicit 2-degree polynomial expansion (bi-gram expansion) worked well on the data, so from then on I used it before training on all the data.

I rose to 4th place by using LR with the best cross-validation parameters and 2-degree polynomial expansion.
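The degree-2 expansion simply augments each instance with all pairwise products of its features, so a linear model can pick up feature interactions. A tiny sketch, using scikit-learn's PolynomialFeatures as a stand-in for lib-GUNDAM's expansion:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[1.0, 2.0, 3.0]])

# Explicit degree-2 expansion: the original features plus every
# product x_i * x_j (including squares), without a bias column.
poly = PolynomialFeatures(degree=2, include_bias=False)
X2 = poly.fit_transform(X)  # 3 original + 6 product features = 9 columns
```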

At that point, I had about two days left and thought I did not have time to train other classifiers. Two ideas came to mind:
1. boost the L2 LR, or
2. add more features.

Thus, I began trying random subspace methods and adaboost to boost the performance of L2 LR. Random subspace methods usually need diverse learners, but I had no other fast learners, so I used only LIBLINEAR, which might not be diverse enough. As I expected, it did not work.
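For reference, a random subspace ensemble trains each base learner on a random subset of the feature columns and averages their predictions. A minimal sketch, assuming a logistic regression base learner and invented data (the author used hi-ensemble with LIBLINEAR, not this code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Train each base learner on 5 of the 10 feature columns, chosen at random.
models = []
for _ in range(5):
    cols = rng.choice(10, size=5, replace=False)
    clf = LogisticRegression(solver="liblinear").fit(X[:, cols], y)
    models.append((cols, clf))

# Average the predicted probabilities of the ensemble members.
avg_prob = np.mean([clf.predict_proba(X[:, cols])[:, 1]
                    for cols, clf in models], axis=0)
```

With a single base-learner family, the members tend to make correlated errors, which is consistent with the lack of improvement reported here.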

Adaboost improved my learners only trivially.

Meanwhile, I began to visit the forums to check what had been discussed. I noticed two things:
1. the training and testing data have overlapping instances, and
2. the contest host seemed to suggest that contestants investigate user-oriented features.

Therefore, I hard-coded labels for those testing instances that also occur in the training data. I also decided to add user features. Since we have five features for each package:

* DependencyCount

* SuggestionCount

* ImportCount

* ViewsIncluding

* PackagesMaintaining

I added five features for each user by averaging the corresponding package feature over the packages that user installed. For example, for DependencyCount and user i, I add a feature userDependencyCount_i = sum_{j : [i installed j]=1} DependencyCount_j / |{j : [i installed j]=1}|. This yielded very good performance and lifted me to third place.
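The averaging above is just a matrix-vector product normalized by each user's install count. A small worked example with an invented installation matrix (the feature name and data are illustrative):

```python
import numpy as np

# Hypothetical data: installed[i, j] == 1 if user i installed package j.
installed = np.array([[1, 0, 1],
                      [1, 1, 0]])
dependency_count = np.array([4.0, 2.0, 6.0])  # per-package DependencyCount

# userDependencyCount_i: sum of DependencyCount over user i's installed
# packages, divided by the number of packages user i installed.
user_dep = installed @ dependency_count / installed.sum(axis=1)
# user 0: (4 + 6) / 2 = 5.0; user 1: (4 + 2) / 2 = 3.0
```

The same formula is repeated for the other four package features, giving five user-level columns.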

Eight hours before the contest ended, I came up with the idea that recommendation problems can be viewed as link prediction problems, so I tried to add some link prediction features. Due to time constraints, I added only preferential attachment. Although it worked better in my cross-validation, it did not beat the previous model. I thought I might be utilizing too much graph information: I want to predict whether there is a link between nodes a and b, but I used information regarding existing links between a and b, which I think is a kind of leakage.
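Preferential attachment scores a candidate edge (a, b) by the product of the two nodes' degrees, on the intuition that high-degree nodes are more likely to form new links. A minimal sketch on an invented toy graph:

```python
import numpy as np

# Hypothetical adjacency matrix of a small undirected graph.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])
deg = A.sum(axis=1)  # node degrees: [2, 3, 2, 1]

# Preferential attachment score for a candidate edge (a, b): deg(a) * deg(b).
def pref_attachment(a, b):
    return deg[a] * deg[b]

score = pref_attachment(0, 3)  # 2 * 1 = 2
```

Note that the degrees here already count the very edges being predicted, which is the leakage concern raised above; a cleaner setup would compute degrees on a graph with the candidate edges held out.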

By then my submission quota was full, so I did no further experiments, although I had wanted to try boosting the L2 LR again. I ended up in 3rd place.

Tools I used

1. lib-GUNDAM (http://www.csie.ntu.edu.tw/~b95028/software/lib-gundam/index.php) for feature preprocessing

2. LIBLINEAR (www.csie.ntu.edu.tw/~cjlin/liblinear/) for training fast L2 LR

3. LIBLINEAR with instance weights (http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/#weights_for_data_instances) for training adaboost, because adaboost needs base learners that can take instance weights into account when learning

4. hi-ensemble (http://www.csie.ntu.edu.tw/~b95028/software/hi-ensemble/index.php) for the random subspace method and adaboost

For algorithm details and experimental statistics, please refer to

Originally published at blog.kaggle.com on February 12, 2011.

How I did it: Ming-Hen Tsai on finishing third in the R competition was originally published in Kaggle Blog on Medium.
