Profile: Dmitry Storcheus, Primary Chair of the NIPS Feature Extraction Workshop and Researcher at Google

Submitted by editor on Tue, 2015/12/01 - 11:19am

Dmitry Storcheus, a young scientist from Google Research, is the primary chair of the NIPS international workshop on feature extraction.

Dmitry Storcheus, a contributor to machine learning research, is the primary chair of the workshop “Feature Extraction: Modern Questions and Challenges” (2015), appointed by the Neural Information Processing Systems (NIPS) Conference. He holds a Master of Science degree in Mathematics from the Courant Institute, where his thesis won the Best Math Thesis Prize (2015), and he is an engineer at Google Research NY. He has published papers such as “A Survey of Questions and Challenges in Feature Extraction” and “Generalization Bounds for Supervised Dimensionality Reduction” in the Journal of Machine Learning Research (JMLR), one of the top peer-reviewed machine learning journals. His paper “Foundations of Coupled Nonlinear Dimensionality Reduction” has been widely cited by researchers and engineers, and his presentation “Theoretical Foundations for Learning Kernels in Supervised Kernel PCA,” delivered at international conferences, earned a Spotlight Presentation Honorable Mention award. He is also a full member of reputable international academic associations: Sigma Xi, the New York Academy of Sciences, and the American Mathematical Society. Despite his youth, he is already an internationally recognized scientist in the field of machine learning.

I have always been fascinated by data science and machine learning: the smart algorithms built by Google, Facebook, and Amazon that are so closely woven into the daily lives of Americans. To me, the way scientists and mathematicians at these companies build machine learning algorithms is both exciting and mysterious. How do they think? What personal stories do they have? After researching and reading carefully about machine learning, I found that the crucial first step for machine learners is feature extraction. Feature extraction means extracting important information from raw data, and it is a fundamental part of every machine learning technology. So if I want to understand machine learning, I need to start with feature extraction and talk to experts in this field.

A well-known expert in feature extraction is Dmitry Storcheus of Google Research. In 2015 he was appointed primary chair of the international workshop “Feature Extraction: Modern Questions and Challenges” (the first workshop of its kind) at the Neural Information Processing Systems (NIPS) conference. Dmitry is one of the few elite young mathematicians and computer scientists from top research labs, such as Google Research, Facebook Research, and Microsoft Research, who are recognized as extraordinary experts in machine learning. He has published his feature extraction research in peer-reviewed journals and has spoken at top conferences. In fact, he recently gave a talk on modern challenges in feature extraction at DataEngConf NYC, a leading engineering conference in the United States, and his presentation received high praise from attendees and organizers. Dmitry’s background makes him the right person to ask about machine learning in general and feature extraction in particular. I was fortunate to have a conversation with Dmitry, and I learned a great deal both about feature extraction and about his personal story, which is also quite interesting.

Based on my conversation with Dmitry Storcheus, I learned that feature extraction is an essential preprocessing step for machine learning algorithms. For example, suppose I want to build an algorithm that can analyze any picture and determine whether or not it shows someone’s face. (It would be awesome, wouldn’t it?) Such an algorithm could be used for automatic login to my computer or my phone. But how should I start? First, I need to collect a balanced sample of face and non-face images that will be used to train my algorithm. Then, immediately after collecting those images, I need to do feature extraction. Feature extraction builds a mathematical model of my images that the algorithm can use. In this specific case, I convert every image into a real-valued vector whose entries are the intensities of the pixels in the image. These intensities are the “features” for the purposes of machine learning, and “feature extraction” is the process of creating a vector out of an image. I then attach a label to every vector indicating whether it is a face or a non-face; for example, faces can be labeled 0 and non-faces 1. Thereafter, I can train a support vector machine (SVM) on those vectors. An SVM is a machine learning algorithm that draws a plane separating the face vectors from the non-face vectors with the maximum margin. Once I have fitted this plane with the SVM using my collected set of images, I have “trained my model,” as machine learning scientists would say. Now that the model is trained, I can take any new picture, convert it into a vector of pixels, and check which side of the hyperplane it falls on: one side means it is a face, the other means it is not. It is a simple example, but this idea of separation has inspired many services that we use today, such as Google Image Search and Photos. This technology blows my mind. Thanks, Dmitry!
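To see this concretely, here is a minimal sketch of the pipeline Dmitry described, written in Python with scikit-learn (an open source library built on SciPy). The images are random stand-ins I made up for illustration, not real data; a real system would load an actual labeled face dataset.

```python
# Minimal sketch of the face/non-face pipeline described above, assuming
# scikit-learn. The "images" are random arrays standing in for real photos.
import numpy as np
from sklearn.svm import SVC

def extract_features(images):
    """Feature extraction: flatten each image into a vector of pixel intensities."""
    return np.array([img.ravel() for img in images])

# Hypothetical training data: equally sized grayscale images as 2-D arrays.
face_images = [np.random.rand(32, 32) for _ in range(50)]     # stand-ins for faces
nonface_images = [np.random.rand(32, 32) for _ in range(50)]  # stand-ins for non-faces

X = extract_features(face_images + nonface_images)
y = np.array([0] * len(face_images) + [1] * len(nonface_images))  # 0 = face, 1 = non-face

# Train a linear SVM: it fits the separating hyperplane with maximum margin.
model = SVC(kernel="linear")
model.fit(X, y)

# Classify a new image: convert it to a pixel vector and check which side
# of the hyperplane it falls on.
new_image = np.random.rand(32, 32)
label = model.predict(extract_features([new_image]))[0]
print("face" if label == 0 else "not a face")
```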

Dmitry pointed out that things are not that simple. The images we use today are substantial in size, and it could take days for my algorithm to train on them. How can this problem be solved? As Dmitry told me, the answer is supervised dimensionality reduction, an important and novel research topic that he is working on at Google. Supervised dimensionality reduction is an area of feature extraction that looks for optimal ways to compress data for a specific problem. Since the amount of data available nowadays is tremendous (e.g., terabytes of images and videos on the Internet), we need ways to reduce the dimension of data without losing the descriptive information it contains. In my algorithm example, dimensionality reduction would help me keep only the small number of pixels that are most relevant to face/non-face classification. For instance, to decide whether a picture shows a face, the pixels at the corners of the picture usually do not matter much, so we can save memory and time by discarding them. Dmitry explained to me that a rigorous mathematical way to discard unnecessary pixels is Principal Component Analysis (PCA). PCA is another machine learning algorithm; it projects vectors onto a lower-dimensional space so that the variance of the projected vectors is maximized. This lets me reduce the dimension of the vectors constructed from my images, say from 1024x1024 pixels down to 100x100, without losing much descriptive information. I also learned that while algorithms like SVM and PCA sound complicated, they are readily available in common open source tools, such as the SciPy ecosystem in Python and packages for R.
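Here is a similarly minimal sketch of the compression idea. One caveat: plain PCA, as used below via scikit-learn, is unsupervised; the supervised variants Dmitry researches are more sophisticated. Still, a PCA-then-SVM pipeline on toy stand-in data shows where dimensionality reduction plugs in before training.

```python
# Minimal sketch of dimensionality reduction before classification, assuming
# scikit-learn. The data is a random stand-in for flattened image vectors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 1024))        # 200 images flattened to 1024-pixel vectors
y = rng.integers(0, 2, size=200)   # toy labels: 0 = face, 1 = non-face

# PCA projects the 1024-dimensional vectors onto the 50 directions of maximum
# variance; the SVM then trains on the compressed features instead of raw pixels.
model = make_pipeline(PCA(n_components=50), SVC(kernel="linear"))
model.fit(X, y)

print("reduced dimension:", model.named_steps["pca"].n_components_)
```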

Dmitry’s personal story of how he got involved in this research is as interesting as the algorithms themselves. He came to the United States from Russia as a student with no money in his pockets but big dreams in his head, much like the plot of an American movie. He studied mathematics at the Courant Institute, where he conducted fundamental mathematical research in machine learning. He experienced the typical grad student life: no sleep, no weekends. But Dmitry’s efforts paid off; he published a novel research idea in his master’s thesis, presented it at many international conferences, and was recognized by Google for his technical skills and research contributions. Google then hired him to do machine learning research and implementation.

It is remarkable to me that Dmitry Storcheus, at such a young age, was appointed a primary workshop chair at the NIPS conference. He earned this esteemed position through the research contributions he has continuously made to machine learning science. His most significant theoretical results, in the paper “Foundations of Coupled Nonlinear Dimensionality Reduction,” were posted on arXiv in September 2015 and quickly gained citations from American, Chinese, Canadian, British, and German scientists. Some programmers were even citing and posting about Dmitry’s work on Twitter, LinkedIn, and GitHub. This shows that Dmitry’s contribution to the field of machine learning has rapidly influenced the American and global research communities. By continuously accumulating valuable knowledge in feature extraction, Dmitry became an expert in this specialized research field. As such an expert, he was appointed primary chair of the NIPS “Feature Extraction” workshop I mentioned earlier, and it was a great honor for him, as a young scientist, to lead it. More than 150 scientists and practitioners will be attending this workshop; I, of course, will attend as well to learn more about feature extraction. I have already learned about SVM and PCA, so it is time to move on to more complex topics.

Overall, it was an amazing experience to learn about feature extraction from the bright Google scientist Dmitry Storcheus. I now realize that machine learning is used almost everywhere; it is a completely new game that is changing modern thinking. Machine learning in the United States is full of incredible opportunities for brilliant young scientists like Dmitry, whose research contributions have been recognized and earned him the chair position for a conference workshop. I wish him the best of luck and am excited to see him use his extraordinary ability and great talent to make more contributions to machine learning that benefit not only science but all of us as Americans. He has the potential to make our experiences with new machine learning products even more exciting.
