Microsoft Research is developing a new data-parallel computing platform, and there’s a new video that makes it much more understandable.
Naiad is a programming environment that takes a new approach to running programs on big data sets. The aim is to find faster ways to carry out analysis of big data. Naiad is described as an incremental, iterative, and interactive data-parallel computing platform that is being developed at Microsoft Research Silicon Valley.
The project grew out of the researchers’ experience with Dryad and DryadLINQ, two Microsoft Research projects for large-scale data processing in C# on clustered computers. Dryad runs sequential programs in parallel, organizing the computation as a directed acyclic graph: the programs you’re running form the graph’s vertices, and the communication channels between them form its edges. DryadLINQ is a compiler that translates LINQ programs into distributed computations that can then be run on clusters of PCs.
The experience with this pair of projects showed that many big-data algorithms contain loops, and these loops are often data-dependent, iterating until the answer stops changing. However, once the answer starts to stabilize, much of the work done in each iteration is redundant with work done in previous iterations, since much of the data is the same. If the later iterations could work on just the changed data rather than the entire set, significant speedups ought to be possible. Naiad is the result.
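To see the redundancy the researchers are describing, consider a simple reachability computation. This is a hypothetical sketch in Python, not Naiad’s actual API: each iteration processes only the "frontier" of data that changed in the previous round, rather than rescanning the entire set, which is the essence of the incremental approach.

```python
# Hypothetical sketch (not Naiad's API): a fixed-point loop that iterates
# until the answer stops changing, where each round touches only the data
# that changed in the previous round rather than the whole data set.

def reachable(edges, start):
    """Return all nodes reachable from `start` over directed `edges`."""
    reached = {start}
    frontier = {start}            # only the newly discovered nodes
    while frontier:               # iterate until nothing changes
        next_frontier = set()
        for src, dst in edges:
            if src in frontier and dst not in reached:
                reached.add(dst)
                next_frontier.add(dst)
        frontier = next_frontier  # later rounds see only changed data
    return reached

edges = [("a", "b"), ("b", "c"), ("c", "a"), ("d", "e")]
print(sorted(reachable(edges, "a")))  # nodes reachable from "a"
```

A naive version would rescan every edge against the full `reached` set each round; restricting each round to the frontier is what makes the later, nearly stable iterations cheap.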
Naiad extends standard batch data-parallel processing models such as MapReduce, Hadoop, and Dryad/DryadLINQ. It supports efficient incremental updates to the inputs, so Naiad can act as a stream-processing system, and alongside this it lets you make use of nested fixed-point iteration.
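The incremental-update idea can be illustrated with a toy word count. Again this is a hypothetical Python sketch rather than Naiad’s API: when new input records arrive, only those records are folded into the existing result, so the computation behaves like a stream processor instead of re-running the whole batch.

```python
# Hypothetical sketch (not Naiad's API): a batch computation whose output
# is updated incrementally as new input records arrive, rather than being
# recomputed from scratch over the full input.

from collections import Counter

class IncrementalWordCount:
    def __init__(self):
        self.counts = Counter()   # the maintained result

    def update(self, new_records):
        """Fold a batch of new records into the existing counts."""
        for record in new_records:
            self.counts.update(record.split())
        return dict(self.counts)

wc = IncrementalWordCount()
wc.update(["big data", "big compute"])
wc.update(["more data"])          # only the new records are processed
```

Combining this kind of incremental input handling with the fixed-point loops described above is what distinguishes Naiad from purely batch systems.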