In some cases the first approach carries too much overhead. For instance, if you want to understand how the kernel works, re-implementing it is far too complex and slow. It might work to implement a lightweight version of it (a model) that abstracts away the components that are not interesting for your learning purposes.
The second approach works pretty well, especially if you have previous experience with similar technologies. Proof of this is the paper I wrote, “AngularJS in Patterns”. It seems to be a great introduction to the framework for experienced developers.
However, building something from scratch and understanding the core underlying principles is always better. The whole AngularJS framework is above 20k lines of code, and parts of it are quite tricky. Very smart developers have worked on it for months, so building everything from an empty file is a very ambitious task. However, in order to understand the core of the framework and its main design principles, we can simplify things a little bit – we can build a “model”.
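To make the “model” idea concrete, here is a toy sketch of AngularJS’s dirty-checking core – a `watch`/`digest` pair – written in Python purely for brevity (AngularJS itself is JavaScript, and the real `$digest` adds TTL limits, async queues, and much more):

```python
class Scope:
    """Toy model of AngularJS's dirty-checking core (a deliberate
    simplification, not the framework's actual implementation)."""

    def __init__(self):
        self._watchers = []

    def watch(self, watch_fn, listener_fn):
        # A fresh sentinel object guarantees the listener fires on the
        # first digest, since no watched value can compare equal to it.
        self._watchers.append({"watch": watch_fn,
                               "listener": listener_fn,
                               "last": object()})

    def digest(self):
        # Keep looping until a full pass sees no changed watched value.
        dirty = True
        while dirty:
            dirty = False
            for w in self._watchers:
                new = w["watch"](self)
                if new != w["last"]:
                    w["listener"](new, w["last"], self)
                    w["last"] = new
                    dirty = True

scope = Scope()
scope.name = "world"
log = []
scope.watch(lambda s: s.name, lambda new, old, s: log.append(new))
scope.digest()   # listener fires once: log == ["world"]
scope.digest()   # value unchanged, listener not called again
```

Stripping the framework down to this one loop makes the central design decision – repeated dirty checking until the model stabilizes – visible in a couple dozen lines.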
via james91b/ida_ipython · GitHub.
This is a plugin to embed an IPython kernel in IDA Pro. The Python ecosystem has amazing libraries (and communities) for scientific computing. IPython itself is great for exploratory data analysis. Tools such as the IPython notebook make it easy to share code and explanations with rich media. IPython makes using IDA Python and interacting with IDA programmatically really fun and easy.
npm package that implements a JavaScript kernel for IPython’s graphical notebook (also known as Jupyter). An IPython notebook combines the creation of rich-text documents (including mathematics, plots and videos) with the execution of code in a number of programming languages.
The execution of code is carried out by means of a kernel that implements the IPython messaging protocol. There are kernels available for Python, Julia, Ruby, Haskell and many others.
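To give a feel for that protocol, here is a sketch of building one `execute_request` message. The four-part layout (header / parent_header / metadata / content) follows the IPython messaging spec; the specific field values below are illustrative, and real transport involves ZeroMQ sockets and signed, multi-part frames rather than a single JSON string:

```python
import datetime
import json
import uuid

def execute_request(code):
    """Build a minimal IPython 'execute_request' message (illustrative sketch)."""
    return {
        "header": {
            "msg_id": str(uuid.uuid4()),          # unique per message
            "msg_type": "execute_request",        # tells the kernel what to do
            "session": str(uuid.uuid4()),
            "username": "user",
            "date": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        },
        "parent_header": {},   # filled in on replies, linking them to the request
        "metadata": {},
        "content": {
            "code": code,      # the source the kernel should execute
            "silent": False,   # False: kernel broadcasts the resulting output
        },
    }

msg = execute_request("1 + 1")
wire = json.dumps(msg)   # JSON-serializable, as the protocol requires
```

A kernel for any language only needs to understand messages shaped like this and reply with the matching `execute_reply` and output messages, which is why kernels exist for so many languages.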
A repository of IPython notebooks can be found here.
Those planning to attend the tutorial at LISA 2014 on kernel debugging will need to do some prep work prior to attending. This repo will also serve as a place to hold the files needed for following along during the tutorial.
Many classification methods, such as kernel methods or decision trees, are nonlinear approaches. However, linear methods that use a simple weight vector as the model remain very useful for many applications. With careful feature engineering and data in a rich-dimensional space, their performance may be competitive with that of a highly nonlinear classifier. Successful application areas include document classification and computational advertising (CTR prediction). In the first part of this talk, we give an overview of linear classification by introducing commonly used formulations. We discuss optimization techniques developed in our linear-classification package LIBLINEAR for fast training. The flexibility over kernel methods in selecting and employing optimization methods can be clearly seen in our discussion. In the second part of the talk, we select a few examples to demonstrate how linear classification is applied in practice, ranging from small to big data. The third part of the talk discusses issues in applying linear classification to big-data analytics. In our recent work on distributed linear classification, we have seen several challenges in this research topic. I will discuss them and hope to get your comments.
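The “simple weight vector as the model” idea can be illustrated with a perceptron on a toy, linearly separable dataset. This is a sketch of the general principle only, not of LIBLINEAR’s actual solvers, which use far more sophisticated optimization:

```python
# Toy linearly separable data: labels in {+1, -1}.
X = [(2, 1), (1, 2), (-1, -1), (-2, -1)]
y = [1, 1, -1, -1]

w = [0.0, 0.0]   # the weight vector *is* the entire model
b = 0.0          # bias term

for _ in range(10):                 # a few passes suffice on this data
    for xi, yi in zip(X, y):
        score = w[0] * xi[0] + w[1] * xi[1] + b
        if yi * score <= 0:         # misclassified: nudge w toward yi * xi
            w[0] += yi * xi[0]
            w[1] += yi * xi[1]
            b += yi

def predict(xi):
    # Classification is just the sign of a dot product plus bias.
    return 1 if w[0] * xi[0] + w[1] * xi[1] + b > 0 else -1

assert all(predict(xi) == yi for xi, yi in zip(X, y))
```

Prediction cost is one dot product per example regardless of training-set size, which is a large part of why linear models scale so well to document classification and CTR prediction.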