Data-Based Review of the Strapless Mio Link HRM


Our bodies generate a lot of data - blood pressure, heart rate, the amount of glucose in the blood, etc. The struggle, however, is how to collect that data. Until recently, heart rate monitors (HRM) were popular only among practitioners of endurance sports - runners, cyclists, swimmers, etc. Nevertheless, a strapless, wristband heart rate monitor seduces a new category of users - data-minded people who are interested in the quantified-self movement.

The Mio Link looks like a watch on your wrist, allowing you to wear it around the clock. Have you ever tried to wear a chest-based HRM outside a workout?

Mio on the wrist

The technology behind a wristband HRM is quite simple - integrated LEDs beam light into the skin, and the pulsing volume of the blood flow is measured. A wristband HRM might provide better accuracy than a chest-based HRM, because the latter is affected by lower air temperatures. Below you can see a data comparison between the Mio Link and a Garmin Soft Strap from one of my workouts:

Mio Link vs Garmin Soft Strap

There is some data discrepancy, but most importantly the peaks and valleys are intact. However, sport workouts are limited in time, and I wanted to know how a wristband HRM performs around the clock. The second chart shows data recorded during the day. As you can see, the error rate (red indicates potential errors) is much higher. This might be explained by the fact that my movements were not constant or repetitive, contrary to running or sleeping.

Heart rate

The third chart shows the data gathered during the night. The data bears some noise as well, but the spikes indicate shifts in the data blocks and, presumably, body movements. It would be interesting to know whether sleep stages can be extracted from heart rate data.

Night rate

If you plan to use it on a daily basis, keep in mind that the battery lasts 8-10 hours. It might sound bad, but during the day you have plenty of time windows when you can charge the battery without losing much sensitive data. For example, while you sit in front of a computer your heart rate will most likely be low and stable, which is a perfect time for charging.

If the idea of a wristband HRM sounds appealing, besides the Mio Link you can also check out the Scosche RHYTHM+, which is an equivalent of the former.

Offline Garmin Workout Visualization


A while ago I built an R script for Garmin data visualization and actually used it myself for more than a year. However, the solution suffers from a few limitations - the output is static and an R installation is necessary.

Here is a new version of the visualization: you need only the html file which I built and a browser (Firefox and Safari are fine; Chrome, however, doesn't allow loading tcx or xml files from the disk).

In order to save the file on the local disk, right-click on the link and then choose “Save as”. Then copy your workout *.tcx file to the same directory where you saved the garmin.html file. On macOS the path is: /Users/my_username/Library/Application\ Support/Garmin/Devices/RANDOM_NUMBER/History/YOU_WORKOUT.tcx. Once you have copied the *.tcx file, rename it to work.tcx. Double-click on garmin.html and you will see a visualization similar to the one below. Bonus? You can see it offline, without uploading or sharing your workout with the world.

Here is an example

Credit Card Fraud Detection


During the last Data Science community meeting in Luxembourg, PhD candidate Alejandro Correa Bahnsen gave a presentation on Credit Card Fraud Detection (CCFD). In short, CCFD is just another machine learning problem, similar to the Network Security and Intrusion Detection (NSID) problem, but it has its own obstacles.

From a business perspective, a card fraud detection system directly impacts the profitability of every credit card operator and is therefore very desirable. The cost of card fraud in the UK alone in 2006 was estimated at ~500 million pounds. Assuming that a CCFD system can identify 30% of all fraud (churn models at telecoms are able to save that much), it would lead to 150 million pounds of savings a year. Hence, we have a desirable product with a price range from 1.5 to 15 million pounds in the UK. Here is the catch - in any given country there are only a few credit card operators; for example, only CETREL operates in Luxembourg. So, if you want to sell a solution, you play an all-or-nothing game.
Additionally, if you wish to build a CCFD system or at least a prototype, you are most likely missing the data and you won't get it. A chicken-and-egg problem.

How might the data set look? Think of millions of rows where less than 0.002% of the records are fraudulent operations. If you train your model on the unadjusted data set, it will predict all future events as normal operations. To avoid such behavior you need to throw away most of the normal operations and balance the data so that the distribution is 1% fraud vs 99% normal, or 5% vs 95%. You can play with freely available network intrusion data to get an idea of what imbalanced data looks like.
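
To make the rebalancing concrete, here is a minimal R sketch of undersampling the majority class; the data frame `transactions` and its 0/1 `fraud` column are hypothetical names used only for illustration:

```r
# Minimal sketch: keep every fraud case, sample normal cases so that
# fraud makes up about 5% of the balanced set.
set.seed(42)
fraud_rows  <- which(transactions$fraud == 1)
normal_rows <- which(transactions$fraud == 0)

n_normal <- 19 * length(fraud_rows)              # 5% fraud vs 95% normal
balanced <- transactions[c(fraud_rows, sample(normal_rows, n_normal)), ]
table(balanced$fraud)
```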

Another thing to keep in mind is the size of the data set. Conventional wisdom says the more data you have, the better the model you can build, but this does not hold if you run a real-time system where latency is a big deal. In such a case you have to think about something similar to a map-reduce framework, where you keep only the averages of the variables per client.
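
As a rough illustration of that idea, the sketch below collapses raw transactions into per-client averages; the column names (`client_id`, `amount`, `hour`) are made up for the example:

```r
# Per-client profile: one row per client with the average of a few
# numeric variables, instead of the full transaction history.
client_profile <- aggregate(cbind(amount, hour) ~ client_id,
                            data = transactions, FUN = mean)
head(client_profile)
```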

CCFD systems work as binary classifiers, where the response is either fraud or a normal operation, meaning that they do not take into account how much the fraud costs. Alejandro tries to incorporate a loss-profit function, where each operation has its own cost. If you think about his approach, it sounds like a regression problem to me.
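
One possible way to express such a cost function - this is my own hedged sketch, not Alejandro's implementation - is to charge the transaction amount for a missed fraud and a fixed administrative fee for a false alert:

```r
# Example-dependent cost: a missed fraud costs the transaction amount,
# a false alert costs a fixed admin fee. All names are illustrative.
admin_fee <- 2.5
total_cost <- function(actual, predicted, amount) {
  sum(ifelse(actual == 1 & predicted == 0, amount,          # missed fraud
      ifelse(actual == 0 & predicted == 1, admin_fee, 0)))  # false alert
}
```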

And the last thing - I suppose it is worth trying to run an unsupervised learning system in parallel. An unsupervised CCFD would issue a lot of false alerts at the beginning, but it would improve considerably over time with good feedback from the supervised CCFD.

Kaggle Challenge - Event Recommendation Engine


The Event Recommendation Engine Challenge was my second challenge at Kaggle and I finished 15th out of 225 on the final (private) leaderboard. I was able to finish 1st on the public leaderboard. Believe it or not, the difference doesn't come from overfitting but rather from an external data source (Google Maps) which was forbidden. I did read the rules, but such an important restriction was buried under an additional layer of rules which I didn't bother to read. So the moral of the story - if you are doing well, read the rules a second time. Nevertheless, it is strange that the host of the challenge didn't preprocess the data and convert the locations of the users into latitude/longitude format. It would definitely lead to better models; in my case such a conversion gave +4% in precision.

For this competition I used random forest almost exclusively and devoted all my time to feature derivation. For the final prediction I built three models and then combined them:

library(randomForest)

# three forests trained on the same data with different seeds,
# then merged into a single ensemble with combine()
set.seed(333)
final_model3=randomForest(factor((interested-not_interested)/2+.5) ~ .,data=final_model,importance=TRUE,nodesize=4)

set.seed(33)
final_model1=randomForest(factor((interested-not_interested)/2+.5) ~ .,data=final_model,importance=TRUE,nodesize=4)

set.seed(3)
final_model2=randomForest(factor((interested-not_interested)/2+.5) ~ .,data=final_model,importance=TRUE,nodesize=4)

final_model=combine(final_model3,final_model1,final_model2)

Below you can find a chart with the most important features of my final model:

Importance of the features

time_diff - From the very beginning I found that the difference between when the event is scheduled to begin and when the user saw the event ad is an important feature which is easy to derive.
popularity - How many users said they are interested in the event.
start_hour - It turns out that it is important to know at what hour an event is going to begin.
friends - The name of this feature might be misleading; it stores how many of the user's friends are invited to the event.
joinedAt - The difference between the year when the user joined the service and 2000-01-01. I was surprised to find that such a feature has any weight at all.
timezone - Had a few NA values which I replaced by 0. Then I converted the numerical timezone value into a factor of two-hour bins: 14-12, 12-10, etc.
birthyear - Numeric value.
weekdays - On which weekday did the event happen (Monday, Tuesday, etc.)?
friend_yes, friend_no, friend_maybe - Number of friends who are interested, not interested, or undecided about the event.
c_XXX - All c_xxxx features were used without preprocessing.
locale - I used the first two letters of the locale variable.
location_mat - Once I found that external sources such as Google Maps are forbidden, I tried to determine whether the user location shares words with the event country, state and city descriptions. Each match adds +1 (max 3) to the location_mat variable; see the sketch after this list.
distance (forbidden) - This feature scored high, but I had to remove it. The first step was to obtain latitude and longitude for users with a known location. If you are interested, here is the source code which shows how easily it can be done in R. I should say that I did both manual and automated data cleaning - converting state names and some frequent errors like “City name 19”. Once I had the coordinates of the users I was able to calculate the distance to the event. Then I used k-means to predict the location of users who did not specify it, based on their friends' locations. For example, if 5 out of 8 friends of the user are based in Indonesia, the user is assigned Indonesia as location and the distance to the event is calculated. Here's the source code for the prediction of user location.
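
For illustration, here is a rough sketch of the location_mat idea; the function and argument names are mine, not the competition's column names:

```r
# Count how many of the event's city/state/country descriptions share
# at least one word with the user's free-text location (0 to 3).
location_match <- function(user_location, event_city, event_state, event_country) {
  user_words  <- tolower(unlist(strsplit(user_location, "[ ,]+")))
  shares_word <- function(x) any(tolower(unlist(strsplit(x, "[ ,]+"))) %in% user_words)
  sum(sapply(c(event_city, event_state, event_country), shares_word))
}

location_match("San Jose, California", "San Jose", "CA", "United States")  # 1
```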

Click here if you are interested in the source code of my solution.

Machine Learning for Hackers


Which way do you prefer to learn new material - deep theoretical background first and practice later, or do you like to break things in order to fix them? If the latter is your way of learning, then most likely you will enjoy Machine Learning for Hackers.

The book has chapters on machine learning techniques such as PCA, kNN and the analysis of social graphs, hence even advanced R users might find something interesting. So I want to show you my example of visualizing the similarity between parliamentarians in Lithuania, an idea taken from chapter 9.

In most cases you should be able to get access to the voting records of the legislative body in your country. Nevertheless, the data can be buried in a “wrong” format such as html or pdf. I used the Scrapy framework to parse the html pages; however, I faced a problem when my IP address was blocked due to too many requests (10 000) within 2 hours. In the cloud age the problem was quickly solved, and I added a delay to my crawler. Here is an example of the data in CSV format.

With the data in hand it was easy to proceed further. To find similarities between parliamentarians I took the voting results of approximately 4000 legislations and built a matrix where rows represent parliamentarians and columns legislations. “Yes” votes were encoded as 1, “No” as -1 and the rest as 0. R has a handy function dist to compute the distances between the rows (parliamentarians) of a data matrix. The result of the function is a one-dimensional measure of the distance between parliamentarians; however, to reveal the structure of the data set we need two dimensions. Once again, R has a function cmdscale which does Classical Multidimensional Scaling (CMS). I found this document very useful in explaining Multidimensional Scaling. Here is the final result:

Click on the image to enlarge.
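
For reference, a minimal sketch of the pipeline described above, assuming a vote matrix `votes` (rows are parliamentarians, columns legislations, values 1/0/-1) built from the scraped CSV data:

```r
# Pairwise distances between parliamentarians, then classical MDS
# down to two dimensions for plotting.
d  <- dist(votes)          # Euclidean distance between rows
xy <- cmdscale(d, k = 2)   # Classical Multidimensional Scaling
plot(xy[, 1], xy[, 2], type = "n", xlab = "", ylab = "")
text(xy[, 1], xy[, 2], labels = rownames(votes), cex = 0.6)
```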

The plot above reveals that the right-wing party TSLKD has a majority in the parliament, LSDP (the socialists) are in opposition, and the liberals (LSF, JF, MG) are in the center. You might argue that this is already known; however, the plot is based on actual data, therefore the differences in voting support the (right, central, left) outlooks of the parliamentarians.
The map shows which members of a party are outliers and which members of other parties could be invited when the new parliament is formed (the second round of the election is on the way).
Members of the left wing are mixed up, and it would make sense for them to merge or form a coalition.

Are you looking for source code? Click here.

Garmin Data Visualization


People rage when governments initiate surveillance projects like CleanIT, yet they share very private data without a second thought.

I have to admit that some data leaks are well buried in the process. Take, for example, Garmin, which produces GPS training devices for runners. In order to see your workouts you are forced to upload sensitive data to the internet. In return you are given a visualization tool and a storage facility. What are the alternatives? It seems that in the past there was a desktop version, however I was not able to find it. So, we are left with the last option - hack it.

First of all you need to transfer the data from the Garmin device to the computer. I own a Forerunner 610, which relies on the ANT network, and I found a Python script which takes care of the data transfer. Once the data is transferred there is another obstacle - Garmin uses a proprietary format, FIT. To tackle this problem I use another Python script, which I have adapted to output CSV format.

Once the data is in CSV format, R can be used to plot it.

I had a lot of fun trying to understand Garmin's longitude and latitude coordinates. Here is a short explanation by Hal Mueller:

The mapping Garmin uses (180 degrees to 2^31 semicircles) allows them to use a standard 32 bit unsigned integer to represent the full 360 degrees of longitude. Thus you get the maximum precision that 32 bits allows you (about double what you’d get from a floating point value), and they still get to use integer arithmetic instead of floating point.
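
Following that explanation, the conversion itself is a one-liner in R; the example value below is only an illustration:

```r
# 2^31 semicircles correspond to 180 degrees
semicircles_to_degrees <- function(x) x * (180 / 2^31)

semicircles_to_degrees(598522398)   # roughly 50.17 degrees
```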

Source code

Building a Presentation, Report or Paper in R


If you need to build a presentation, you obviously have the following options:

  • Powerpoint alike presentation

  • Online engines

  • LaTex

The first two are beloved by business people and the third one is widely used in academia. The objective of the first group is a shiny presentation, contrary to the second, where asceticism and the demand for automation are top priorities. However, if you are a data scientist or any other data specialist with a need to build an automated report, then you know that LaTeX is just wrong.
LaTeX allows you to build a shiny presentation or an outstanding paper, however it can take beginners light years to build something useful. If you have never tried LaTeX, here is an example of the monster - you literally have to code a document or presentation:

<code>\documentclass{article}
\title {Investment strategy}
\author {Dzidorius Martinaitis}
\begin{document}
\maketitle
\end{document}</code>

So, what do you do if you need only 1% of all LaTeX features and the report/document needs to be built automatically? It turns out that HTML's little brother Markdown saves the day. Markdown (.md) source files are easy to read and easy to write, and you can convert them into .html, .pdf, .docx, .tex or any other format. There are many ways to do the conversion; I use the Pandoc utility. By the way, this post was written in Markdown in Vim and you can check the source file.

However, the nicest thing about Markdown is its integration with R. You can build your report in one file, where R code is embedded in Markdown. The knitr package will convert the R code into Markdown simply by calling this piece of code:

<code>require(knitr);
knit('workshop.Rmd', 'workshop.md');</code>

Below you will find an excerpt of an .Rmd file, which is a mix of R and Markdown:

<code>Get the data
===

Who is tweeting about #Haxogreen

```{r results='asis',comment=NA, message=FALSE}
require(twitteR)
load('tweets.Rdata')
names=sapply(tweets,function(x)x$screenName)
rez=(aggregate(names,list(factor(names)),length))
rez=rez[order(rez$x),]
colnames(rez)=c('name','count')
options(xtable.type = 'html')
require(xtable)
xtable(t(tail(rez,6)))
```

Plot top10 tweeters
===
```{r topspam, figure=TRUE,fig.cap=''}
barplot(tail(rez$count,10),names.arg=as.character(tail(rez$name,10)),cex.names=.7,las=2)
```</code>

Here is the workshop presentation which contains the example above - I built it for the Haxogreen hackers camp, and the source code can be found on GitHub.

How to Track Twitter Unfollowers in R


I have a Twitter account and it is relatively easy to see new followers or subscribers. However, I was looking for a way to know who the unfollowers are. I have noticed that some (un)subscriptions happen in bulk, which made me think that either I tweeted some bullshit and upset a bunch of people, or spam bots work in sync. With that in mind I have created a simple R script which produces Markdown and html reports about unfollowers. You can find an example of such a report below.

The advantage of this script is that it does not require you to sign in or share your data. You just have to install the twitteR package and you are ready to go. Nevertheless, if you want to create the Markdown report, you need to install the markdown and knitr packages as well.

The source code and build scripts can be found here.

```{r echo=FALSE,message=FALSE}
require(twitteR)
setwd('~/git/twitTracker/')
usr=getUser("dzidorius")
tmp=sapply(usr$getFollowers(),function(x)x$screenName)

if(!file.exists('users.csv'))
{
  ## when file doesn't exist - take users list and add some artificial user
  write.table(c(as.character(tmp),'dzidorius'),'users.csv')

}

old_list=as.character(read.table('users.csv')$x)
users=lookupUsers(old_list[which(!(old_list %in% as.character(tmp)))])
if(length(users)==0)
{
  ## stop() doesn't work under knitr
  cat('no one left you')  

}
```


```{r comment=NA,echo=FALSE,message=FALSE,results='asis'}
for(i in 1:length(users))
{
cat(paste("**",users[[i]]$name," @",users[[i]]$screenName,"**", "\n===\n",sep=""))
cat(paste("![](https://api.twitter.com/1/users/profile_image?screen_name=",users[[i]]$screenName,
          "&size;=bigger)",sep=''))
cat(paste("  \n**Created:** ",users[[i]]$created,
          "  \n**Spam rate:** ",round(users[[i]]$followersCount/users[[i]]$friendsCount,digits=2),
          "  \n**Activity:** " , users[[i]]$statusesCount,
          "  \n**Location:** ", users[[i]]$location,"  \n",users[[i]]$description,"  \n",
          "**Last status:** ",(users[[i]]$lastStatus$text),"\n\n",sep=""))
}
```

```{r comment=NA,echo=FALSE,message=FALSE,results='asis'}
write.table(as.character(tmp),'users.csv')
```

Dzidas @dzidorius

Created: 2010-02-18 13:09:29 Spam rate: 0.98 Activity: 315 Location: Luxembourg Java, C++ and R developer & data junkie Last status: It is rainy summer in #Luxembourg but there is a party! http://t.co/FZ7eZq6u

Data Mining for Network Security and Intrusion Detection


In preparation for the “Haxogreen” hackers summer camp which takes place in Luxembourg, I was exploring the network security world. My motivation was to find out how data mining is applicable to network security and intrusion detection.

The Flame virus, Stuxnet and Duqu proved that static, signature based security systems are not able to detect very advanced, government sponsored threats. Nevertheless, signature based defense systems are mainstream today - think of antivirus and intrusion detection systems. What do you do when the unknown is unknown? Data mining comes to mind as the answer.

Data mining is or can be employed in the following areas: misuse/signature detection, anomaly detection, scan detection, etc.

Misuse/signature detection systems are based on supervised learning. During the learning phase, labeled examples of network packets or system calls are provided, from which the algorithm can learn about the threats. This is a very efficient and fast way to find known threats. Nevertheless, there are some important drawbacks, namely false positives, novel attacks and the complication of obtaining initial data for training the system. False positives happen when normal network flow or system calls are marked as a threat. For example, a user can fail to provide the correct password three times in a row, or start using a service in a way that deviates from the standard profile. A novel attack can be defined as an attack not yet seen by the system, meaning that the signature or pattern of such an attack has not been learned and the system will be penetrated without the knowledge of the administrator. The last obstacle (the training dataset) can be overcome by collecting the data over time or relying on public data, such as the DARPA Intrusion Detection Data Set. Although misuse detection can be built with your own data mining techniques, I would suggest a well known product like Snort, which relies on crowd-sourcing.

Anomaly/outlier detection systems look for deviations from normal or established patterns within the given data. In the case of network security, any threat should show up as an anomaly. Below you can find a two-feature plot, where the number of logins is plotted on the x axis and the number of queries on the y axis. The color indicates the group to which the points are assigned - blue ones are normal, red ones anomalies.

Logins vs. queries, with anomalies in red

Anomaly detection systems constantly evolve - what was the norm a year ago can be an anomaly today. The algorithm compares the network flow with the historical flow over a given period and looks for outliers which are far away. Such a dynamic approach allows detecting novel attacks; nevertheless, it generates false positive alerts (it marks normal flow as suspicious). Moreover, hackers can mimic a normal profile if they know that such a system is deployed.

The first task when implementing anomaly detection (AD) is the collection of the data. If the AD is going to be network based, there are two possibilities to collect aggregated data from the network. Some Cisco products provide aggregated data via the NetFlow protocol. Alternatively, you can use Wireshark or tshark to collect network flow data from your computer. For example:

tshark -T fields -E separator=, -E quote=d -e ip.src -e ip.dst -e tcp.srcport -e tcp.dstport -e udp.srcport -e udp.dstport -e tcp.len -e ip.len -e eth.type -e frame.time_epoch -e frame.len

Once you have enough data for the mining process, you need to preprocess the acquired data. In the context of intrusion, anomalous actions happen in bursts rather than as a single event. Varun Chandola et al. proposed deriving the following features:

  • Time window based:
    - Number of flows to unique destination IP addresses inside the network in the last T seconds from the same source
    - Number of flows from unique source IP addresses inside the network in the last T seconds to the same destination
    - Number of flows from the source IP to the same destination port in the last T seconds
    - Number of flows to the destination IP address using the same source port in the last T seconds

  • Connection based:
    - Number of flows to unique destination IP addresses inside the network in the last N flows from the same source
    - Number of flows from unique source IP addresses inside the network in the last N flows to the same destination
    - Number of flows from the source IP to the same destination port in the last N flows
    - Number of flows to the destination IP address using the same source port in the last N flows

The features can be derived host based (from system calls) or network based (from packet information).

Below you can find an example of feature creation in R. The dataset was created by running the tshark command specified above.

#load data
library(plyr)   # for ddply
tmp=read.csv('stats2.csv',colClasses=c(rep('character',11)),header=F)
#get rid of everything below minutes in the timestamp
tmp[,10]=as.integer(as.POSIXct(format(as.POSIXct(as.integer(tmp[,10]),origin='1970-01-01'),'%Y-%m-%d %H:%M:00')))
#fix some rows
tmp=tmp[-which(sapply(tmp[,1],function(x) nchar(x)>15)),]
tmp=tmp[which(!is.na(tmp[,4])),]

#aggregate data by 5 mins. it assumes that the flow is continuous
factor=as.factor(tmp[1:5000,10])

feature=do.call(rbind, sapply(seq(from=1,to=length(factor),by=4),function(x){ return(list(ddply( subset(tmp,factor==levels(factor)[x:(x+4)]),.(V1,V4),summarize,times=length(V11),.parallel=FALSE ))) }))

After preprocessing the data we can apply local outlier detection, kNN, random forest and other algorithms. I will provide R code and a practical implementation of some algorithms in a following post.

While preparing this post I was looking for books, but I found only a few covering both data mining and network security. To my surprise, the book Data Mining and Machine Learning in Cybersecurity includes both topics and is well written. However, if you are a security specialist looking for data mining books, you can read my summary of “Data Mining: Practical Machine Learning Tools and Techniques”.

My First Competition at Kaggle


For me, Kaggle has become a social network for data scientists, as stackoverflow.com or github.com are for programmers. If you are a data scientist, machine learner or statistician, you are better off having a profile there; otherwise you do not exist.

Nevertheless, I won't bet on the rosy future for data scientists that journalists suggest (the sexy job of the next decade). For sure, the demand for such specialists is on the rise. However, I see one big threat for data scientists - Kaggle and similar service providers. You see, such services allow tapping high-end data scientists (think of a PhD in a hard science) at a minuscule fraction of the real price. Think of the Hollywood business model - the top players get the majority of the pool and the rest are starving. If you try the same service model on IT projects you will most likely get burned. My reasoning may be wrong, but I suspect that project timespan is the issue - IT projects can take a while to finish (1-10 years), but a mainstream ML project won't take that long.

Notwithstanding these obstacles, machine learning, information retrieval, data mining, etc. are a must, together with the ability to write production code, deal with streaming big data and cope with the performance of an intelligent system. Then, in programmers' parlance, you will become a “data scientist ninja” and every company will die for you. There is a good post on the subject on the mikiobraun blog, but mind you, it is a bit controversial.

Although for the last 4 years I have often been working on financial models and time series, this competition added new experience and a hunger for knowledge. During the competition I found this book very practical and full of ideas about what to do next: Data Mining: Practical Machine Learning Tools and Techniques. As a complementary book I used Data Mining: Concepts and Techniques, though most of the information can be found in either one of them. I will try to summarize some chapters in my own story.

Understanding the data. The “Online Product Sales” competition metadata (data about data) is miserly - there are three types of fields (date fields, categorical fields and quantitative fields) plus the response data for the next 12 months. However, metadata is the most important element in any ML project; it can save you a lot of time once you understand it better, and it leads to a much better forecast if you have “domain knowledge”.

Cleaning the data. There is a famous phrase - “garbage in, garbage out” - meaning that before any further action you have to detect and fix incorrect, incomplete or missing data. You have many possibilities for dealing with missing data: remove all rows where data is missing; replace it with the mean, a regressed value or the nearest value, etc. If your data is plentiful and the missing values are random (meaning that NA values do not bear any information), just get rid of them. Otherwise you need to impute new values based on the mean or another technique. Mean-based replacement worked best for me in this competition. Outliers are another type of trouble. Suppose that a variable is normally distributed, but a few values are far away from the center. The easiest solution would be to remove such values - as many do in finance by removing the “crisis period”. When the next crisis hits, journalists rush to learn a new buzzword - black swan. It turns out that outliers can't be ignored, because their impact is huge. So be cautious while dealing with outliers.
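
As a small illustration of the mean-based replacement mentioned above, here is a sketch for a hypothetical data frame `train`:

```r
# Replace NA values in every numeric column with that column's mean.
num_cols <- sapply(train, is.numeric)
train[num_cols] <- lapply(train[num_cols], function(x) {
  x[is.na(x)] <- mean(x, na.rm = TRUE)
  x
})
```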

Feature selection. It was surprising to me that too many features or variables can pollute the forecast; therefore you need to do feature selection. Such a task can be done manually by checking the correlation matrix, covariance, etc. However, random forest or generalized boosted models can lead to a better selection. In R you just call randomForest() or gbm() and the job is done.
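
For example, a hedged sketch of screening features with random forest variable importance, assuming a data frame `train` with a numeric response `target` (the %IncMSE column applies to regression forests):

```r
library(randomForest)

fit <- randomForest(target ~ ., data = train, importance = TRUE, ntree = 200)
imp <- importance(fit)
head(imp[order(imp[, "%IncMSE"], decreasing = TRUE), ])  # strongest features first
varImpPlot(fit)
```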

Variable transformation - a way to get superior performance. The “Online Product Sales” competition has two date fields, however these fields were encoded as integers. Transforming these variables into dates and retrieving the year and month led to better performance of the model. In most cases, taking the logarithm of numeric fields gives a performance boost. Scaling (from 0 to 1 or from -1 to 1) and centering (toward a normal distribution) might be considered when linear models are in use. It is worth transforming categorical variables as well, where 1 would mean that a feature belongs to the group and 0 otherwise. Check the model.matrix function in R for the latter transformation and the preProcess function in the caret package for numerical variables.
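
A short sketch of those transformations; the column names (`category`, `price`, `quantity`) are made up, and `category` is assumed to be a factor:

```r
library(caret)

# dummy-code a categorical variable: one 0/1 column per level
dummies <- model.matrix(~ category - 1, data = train)

# centre and scale numeric variables with caret
pp     <- preProcess(train[, c("price", "quantity")], method = c("center", "scale"))
scaled <- predict(pp, train[, c("price", "quantity")])

# log transform of a numeric field
train$log_price <- log1p(train$price)
```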

Validation stage - helps you measure the performance of the model. If you have a huge database to build a model, you can divide your set into two or three parts - for training, testing and cross-validation - and you are ready to start. However, if you are not so lucky, then other methods come into play. The most popular method is division of the set into two groups, namely “training” and “test”, and rotating them about 10 times. For example, you have 100 rows, so you take the first 75 for training and 25 for testing, and you check the performance ratio. In the next step you take the rows from 25 to 100 for training and use the first 25 for testing. Once you repeat such a procedure 10 times, you have 10 performance ratios and you take their average. Stratified sampling is a buzzword which you should know when you do the sampling. Keeping all this information in mind, I wasn't able to implement accurate cross-validation and my results differed within a 0.05 range.
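
Something along these lines - a rough 10-fold cross-validation sketch for a hypothetical random forest model on a data frame `train` with a numeric response `target`:

```r
library(randomForest)

set.seed(1)
folds  <- sample(rep(1:10, length.out = nrow(train)))   # random fold assignment
scores <- sapply(1:10, function(k) {
  fit  <- randomForest(target ~ ., data = train[folds != k, ])
  pred <- predict(fit, train[folds == k, ])
  sqrt(mean((pred - train$target[folds == k])^2))       # RMSE on the held-out fold
})
mean(scores)
```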

Model selection and ensemble. Intuitively you want to choose the best performing algorithm, however a mix of them can lead to superior performance. For this regression problem I trained four models (two random forest versions, gbm, svm), made the predictions, averaged the results, and that led to a better prediction.
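
In code the ensemble is nothing fancy - a hedged sketch, assuming the four fitted models and a test set already exist under these hypothetical names:

```r
# Average the predictions of the four regression models.
pred_rf1 <- predict(fit_rf1, test)
pred_rf2 <- predict(fit_rf2, test)
pred_gbm <- predict(fit_gbm, test, n.trees = 500)   # gbm's predict needs n.trees
pred_svm <- predict(fit_svm, test)

final_prediction <- (pred_rf1 + pred_rf2 + pred_gbm + pred_svm) / 4
```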