Programming Question - 2D Array in Spiral Fashion

Programming Question: given a 2D array, print it in spiral fashion (easy)

Given a 2D array, print it in spiral fashion, as shown in the picture.

Solution:
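The full solution is behind the link; as a sketch of one standard approach (mine, not necessarily the author's), the matrix can be walked layer by layer, shrinking the boundaries after each pass:

```java
import java.util.ArrayList;
import java.util.List;

public class SpiralPrint {
    // Walk the matrix one layer at a time: top row left-to-right,
    // right column top-to-bottom, bottom row right-to-left,
    // left column bottom-to-top, then shrink the bounds inward.
    public static List<Integer> spiralOrder(int[][] m) {
        List<Integer> out = new ArrayList<>();
        if (m.length == 0) return out;
        int top = 0, bottom = m.length - 1, left = 0, right = m[0].length - 1;
        while (top <= bottom && left <= right) {
            for (int j = left; j <= right; j++) out.add(m[top][j]);
            for (int i = top + 1; i <= bottom; i++) out.add(m[i][right]);
            if (top < bottom && left < right) {
                for (int j = right - 1; j >= left; j--) out.add(m[bottom][j]);
                for (int i = bottom - 1; i > top; i--) out.add(m[i][left]);
            }
            top++; bottom--; left++; right--;
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] m = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
        System.out.println(SpiralPrint.spiralOrder(m)); // [1, 2, 3, 6, 9, 8, 7, 4, 5]
    }
}
```

The guard on the inner two loops avoids double-printing when a layer degenerates to a single row or column.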

Yet another one on the Fibonacci series - Knuth multiplication

Fibonacci series - Knuth multiplication - Java

Leonardo Fibonacci, the Italian mathematician (born in Pisa, 1170, one of the most important mathematicians of all time), introduced the famous series defined recursively as follows:

    \[F(n) = F(n-1) + F(n-2)\]

    \[F(0) = 0\]

    \[F(1) = 1\]

We all know that as programmers, at least once in our life, we will write some piece of code to compute the n-th element of the series. The recursive implementation is straightforward but inefficient, due to the exponential number of recursive calls, which makes it "hard" to compute numbers bigger than 40. It is very easy to come up with an iterative or memoised implementation that lowers the complexity to O(n) and allows us to efficiently compute "any" number of the series.
In this post we will derive a Java implementation with complexity O(log_2(n))1) using results from Knuth, Fibonacci multiplication, 1988. Read On…
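As a taste of where the logarithmic bound comes from, here is a minimal Java sketch of the fast-doubling identities F(2k) = F(k)(2F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2, which follow from the matrix formulation. This is an illustrative sketch, not the post's implementation; it uses long, which overflows past F(92), so a full version would use BigInteger.

```java
public class FastFib {
    // Returns the pair {F(n), F(n+1)} using fast doubling:
    //   F(2k)   = F(k) * (2*F(k+1) - F(k))
    //   F(2k+1) = F(k)^2 + F(k+1)^2
    // Each call halves n, giving O(log_2(n)) multiplications.
    static long[] fibPair(long n) {
        if (n == 0) return new long[]{0, 1}; // {F(0), F(1)}
        long[] p = fibPair(n / 2);
        long a = p[0], b = p[1];          // a = F(k), b = F(k+1)
        long c = a * (2 * b - a);         // F(2k)
        long d = a * a + b * b;           // F(2k+1)
        return (n % 2 == 0) ? new long[]{c, d} : new long[]{d, c + d};
    }

    public static long fib(long n) { return fibPair(n)[0]; }

    public static void main(String[] args) {
        System.out.println(FastFib.fib(50)); // 12586269025
    }
}
```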

References   [ + ]

1. using results from Knuth, Fibonacci multiplication,1988

Haskell - Sudoku Solver - List Monad

Haskell Sudoku Solver - List Monad

In this article we will write a Sudoku solver in Haskell using the expressive and powerful List Monad to enumerate the solutions transparently. I assume that the reader has a general understanding of monads (if not, read http://www.idryman.org/blog/2014/01/23/yet-another-monad-tutorial/).

Su Doku, literally "number place", is the name of a popular puzzle game that is a special case of the older Latin Square (invented by Leonhard Euler). A Latin Square forces each symbol to appear exactly once in each row and column. In addition to these constraints, Sudoku forces each symbol to appear once in each 3 \times 3 sub-matrix. In the general case Sudoku is NP-complete (by an easy reduction to graph coloring), but certain schemas can be solved by deduction and logic alone (in that case the puzzle is said to be well constructed).

At a high level a Sudoku board can be viewed as a list of cells

data Board = Board [Square]
    deriving (Show)

and a Square must certainly carry information about its position and the digit it contains, or better, the digits it could contain (according to the cells in the same row, column and sub-matrix)

data Square = Square ColDigit RowDigit BoxDigit (Either [Digit] Digit)
    deriving (Show)

type RowDigit = Digit
type ColDigit = Digit
type BoxDigit = Digit
type Digit    = Char

While solving the puzzle, each square contains either a list of the possible digits that can be placed in it OR (this is the Either logic) a fixed digit (i.e., the square was given as a clue or has been deduced).

The following Squares have positions (4,1) and (4,2) respectively, and both lie in sub-matrix 4; s1 can contain the digit 1, 5, 7 or 4, while s2 is fixed to 6.

let s1=Square '4' '1' '4' (Left ['1','5','7','4'])
let s2=Square '4' '2' '4' (Right '6')

The solver works such that it returns the list of possible solutions for a schema (a not well constructed Sudoku may have multiple solutions). In particular, if no unsolved squares are left (the ones with Left in the Either part), the board is solved and does not need any further processing; otherwise we take the square with the minimum number of possible values and try to solve the board using, one at a time, all of its candidate values. This sort of backtracking is performed transparently by the List monad, which is used to represent nondeterminism (i.e., multiple possibilities).

solveMany :: Board -> [Board]
solveMany brd =
    case getUnsolvedSquare brd of
        []  -> return brd -- all squares correctly set up
        sqs -> do
            let selSquare@(Square c r b (Left ds)) : _ = sortBy byLeftLenght sqs
            sq <- [ Square c r b (Right d) | d <- ds ] -- try all possible moves
            solveMany (setSquare sq brd)

Recall that do notation is just syntactic sugar for the bind (>>=) operator, and that for lists it is defined as follows

(>>=) :: [a] -> (a -> [b]) -> [b]
xs >>= f = concat (map f xs)

It then applies f = solveMany (which takes a Board and produces a [Board]) to all the digits in the Left of the selected square.
Here is the same function written with an explicit bind operator application, which makes clearer the monadic operation that does the dirty job for us.

solveMany :: Board -> [Board]
solveMany brd =
    case getUnsolvedSquare brd of
        []  -> return brd -- all squares correctly set up
        sqs ->
            let Square c r b (Left ds) : _ = sortBy byLeftLenght sqs
            in [ Square c r b (Right d) | d <- ds ] >>= (\sq -> solveMany (setSquare sq brd))

getUnsolvedSquare :: Board -> [Square]
getUnsolvedSquare (Board sqs) = filter isUnsolved sqs
    where
        isUnsolved (Square _ _ _ (Left _)) = True
        isUnsolved _                       = False

The setSquare function is the one devoted to constraint propagation, and is defined as follows:

setSquare :: Square -> Board -> Board
setSquare sq@(Square scol srow sbox (Right d)) (Board sqs) = Board (map set sqs)
    where
        set osq@(Square ocol orow obox ods)
            | scol == ocol && srow == orow                 = sq
            | scol == ocol || srow == orow || sbox == obox = Square ocol orow obox (checkEither ods)
            | otherwise                                    = osq

        checkEither (Left lds)             = Left (lds \\ [d])
        checkEither r@(Right d') | d == d' = error "Already set this square"
        checkEither dd                     = dd
setSquare _ _ = error "Bad call to setSquare"

It places a Square (with a Right digit) on a Board and returns the new Board, taking care of removing the just-inserted digit from the possible values of the Squares in the same row, column and sub-matrix. This is how the constraints are propagated, cutting off the search space.

The complete source code is available here and on github.

Stream Compaction on GPU - Efficient implementation - CUDA

Stream Compaction - Introduction 1) 2)


An efficient implementation of stream compaction on GPU is presented, with code examples. Full source code on github.

Stream compaction/reduction is commonly referred to as the operation of removing unwanted elements from a collection. More formally, imagine we have a list A_{0 \dots N} of N elements and a predicate p : A \to \left\{ True, False \right\} that bisects A into wanted and unwanted elements (the ones that satisfy the predicate p and the ones that do not). The stream compaction of A under p is an… Read On…
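To make the definition concrete before the GPU details, here is a sequential Java sketch (illustrative only, not the CUDA code from the post) of the scan-based scheme the cited papers build on: flag each element with p, take an exclusive prefix sum of the flags to assign each kept element its output slot, then scatter. On a GPU, each of the three phases is parallelized.

```java
import java.util.function.IntPredicate;

public class Compact {
    // Scan-based stream compaction, sequential sketch:
    // 1) flags[i] = 1 if p(a[i]) holds, else 0
    // 2) idx = exclusive prefix sum of flags (each kept element's output slot)
    // 3) scatter kept elements to their slots
    public static int[] compact(int[] a, IntPredicate p) {
        int[] flags = new int[a.length];
        for (int i = 0; i < a.length; i++) flags[i] = p.test(a[i]) ? 1 : 0;

        int[] idx = new int[a.length];
        int sum = 0;
        for (int i = 0; i < a.length; i++) { idx[i] = sum; sum += flags[i]; }

        int[] out = new int[sum]; // total count of kept elements
        for (int i = 0; i < a.length; i++)
            if (flags[i] == 1) out[idx[i]] = a[i];
        return out;
    }

    public static void main(String[] args) {
        int[] r = Compact.compact(new int[]{3, -1, 4, -1, 5}, x -> x > 0);
        System.out.println(java.util.Arrays.toString(r)); // [3, 4, 5]
    }
}
```

Note that the scatter phase is order-preserving, which is exactly what distinguishes compaction from an unordered filter.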

References   [ + ]

1. Markus Billeter et al., Efficient Stream Compaction on Wide SIMD Many-Core Architectures
2. D.M. Hughes et al., InK-Compact: In-Kernel Stream Compaction and Its Application to Multi-Kernel Data Visualization on GPGPU

Haskell Seminar @ UNICAL

Haskell Seminar at University of Calabria

During the last year I became very interested in functional programming, and in Haskell in particular.
Very few people write and study Haskell at the University of Calabria (one of them is the maintainer of GNOME for NixOS), so I decided to give an introductory lesson (a big thanks goes to the guys of the ciotoflow). It was a great experience, and people were enthusiastic and curious.
All the material I produced is available for download and use, and will hopefully be gradually updated and improved. All the files are available on github.

Introductory slides here


Project Euler - Problem 24

In this article we want to briefly discuss a possible solution to Project Euler problem 24, which asks:

A permutation is an ordered arrangement of objects. For example, 3124 is one possible permutation of the digits 1, 2, 3 and 4. If all of the permutations are listed numerically or alphabetically, we call it lexicographic order. The lexicographic permutations of 0, 1 and 2 are:

012 021 102 120 201 210

What is the millionth lexicographic permutation of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9?

One possible approach could obviously be to brute-force all the lexicographic permutations and then take the millionth element out of the sorted collection of permutations (interpreted as numbers). The drawback of this solution is that, unfortunately, the number of permutations is more than exponential: O(n!).

We want to show here a more mathematical/combinatorial approach. First of all, let's consider what the permutations look like.
The first permutations are (thanks to Stefano Germano for the typo correction) \{0,1,2,3,4,5,6,7,8,9\}, \{0,1,2,3,4,5,6,7,9,8\}, \{0,1,2,3,4,5,6,8,7,9\}, \ldots, \{0,9,8,7,6,5,4,3,2,1\}.
A deeper look at the permutations reveals that they are all the sequences starting with 0 concatenated with a permutation of the remaining 9 elements [1,2,3,4,5,6,7,8,9]. How many of them are there? 9!.
It means that the permutation we are looking for has to be greater than the 9! = 362880-th permutation: \{0,9,8,7,6,5,4,3,2,1\}.
From this we can deduce that we can also skip all the sequences starting with 1, because they again produce 9! permutations, and 2 \times 9! < 10^6. Moreover, our millionth permutation is among the sequences starting with 2, because 3 \times 9! > 10^6 (hence it is not among the permutations starting with 3). Read On…
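The counting argument above, applied digit by digit, is exactly the factorial number system: dividing the (0-based) index by (k-1)! selects which of the k remaining digits comes next, and the remainder recurses. A compact Java sketch of this idea (mine, not from the post):

```java
import java.util.ArrayList;
import java.util.List;

public class Euler24 {
    // n-th (1-based) lexicographic permutation of the given digits,
    // via the factorial number system: at each step, idx / (k-1)!
    // picks the next digit from the remaining pool.
    public static String nthPermutation(String digits, long n) {
        List<Character> pool = new ArrayList<>();
        for (char c : digits.toCharArray()) pool.add(c);

        long[] fact = new long[pool.size() + 1];
        fact[0] = 1;
        for (int i = 1; i < fact.length; i++) fact[i] = fact[i - 1] * i;

        long idx = n - 1; // switch to 0-based index
        StringBuilder sb = new StringBuilder();
        for (int k = pool.size(); k >= 1; k--) {
            int pos = (int) (idx / fact[k - 1]);
            idx %= fact[k - 1];
            sb.append(pool.remove(pos));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(Euler24.nthPermutation("0123456789", 1000000)); // 2783915460
    }
}
```

On the example from the problem statement, nthPermutation("012", 3) yields "102", matching the listed order.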

CUDA - Julia Set example code - Fractals

This article presents a CUDA parallel code for the generation of the famous Julia set. Informally, a point of the complex plane belongs to the set if, given a function f(z), the series z_{n+1} = f(z_n) does not tend to infinity. Different functions and different initial conditions eventually give rise to fractals. One of the most famous series is z_{n+1} = z_n^2 + c (the one that generated the video below).
Some nice pictures may be obtained with the following initial conditions:

  1. c = 1j # dentrite fractal
  2. c = -0.123 + 0.745j # douady's rabbit fractal
  3. c = -0.750 + 0j # san marco fractal
  4. c = -0.391 - 0.587j # siegel disk fractal
  5. c = -0.7 - 0.3j # NEAT cauliflower thingy
  6. c = -0.75 - 0.2j # galaxies
  7. c = -0.75 + 0.15j # groovy
  8. c = -0.7 + 0.35j # frost

Here is a video showing a sequence of pictures generated using different initial conditions. A nice example of how math and computer science can produce art!

CUDA Julia set code

The code is written such that it is very easy to change the function and the initial condition (just edit the juliaFunctor device function).

#define TYPEFLOAT

#ifdef TYPEFLOAT
#define TYPE float
#define cTYPE cuFloatComplex
#define cMakecuComplex(re,i) make_cuFloatComplex(re,i)
#endif
#ifdef TYPEDOUBLE
#define TYPE double
#define cTYPE cuDoubleComplex
#define cMakecuComplex(re,i) make_cuDoubleComplex(re,i)
#endif

__global__ void computeJulia(int* data,cTYPE c,float zoom){
	int i =  blockIdx.x * blockDim.x + threadIdx.x;
	int j =  blockIdx.y * blockDim.y + threadIdx.y;

	if(i<DIMX && j<DIMY){
		cTYPE p = convertToComplex(i,j,zoom);
		data[i*DIMY+j]=evolveComplexPoint(p,c);
	}
}


cTYPE c0;
__device__ cTYPE juliaFunctor(cTYPE p,cTYPE c){
	return cuCaddf(cuCmulf(p,p),c);
}

__device__ int evolveComplexPoint(cTYPE p,cTYPE c){
	int it =1;
	while(it <= MAXITERATIONS && cuCabsf(p) <=4){
		p=juliaFunctor(p,c);
		it++;
	}
	return it;
}


__device__ cTYPE convertToComplex(int x, int y,float zoom){

	TYPE jx = 1.5 * (x - DIMX / 2) / (0.5 * zoom * DIMX) + moveX;
	TYPE jy = (y - DIMY / 2) / (0.5 * zoom * DIMY) + moveY;
	return cMakecuComplex(jx,jy);

}


The code above contains the GPU instructions devoted to the computation of a single set. The kernel computeJulia() is executed by DIMX*DIMY threads (the dimensions of the image), which independently convert their pixel indices into the corresponding point of the complex plane and compute the evolveComplexPoint() function on it (each thread computes a single pixel, which has integer coordinates). evolveComplexPoint() takes care of checking how fast the point diverges.

The code is available here on github. It generates a number of PNGs that can then be merged into a video (like the one presented here) using a command line tool such as ffmpeg.

In order to compile and run it you need the CUDA SDK, a CUDA capable device and libpng installed on your system.
To compile and/or generate the video simply run:

#compilation
nvcc -I"/usr/local/cuda-6.5/samples/common/inc"  -O3 -gencode arch=compute_35,code=sm_35  -odir "src" -M -o "src/juliaSet.d" "../src/juliaSet.cu"

nvcc  -I"/usr/local/cuda-6.5/samples/common/inc" -O3 --compile --relocatable-device-code=false -gencode arch=compute_35,code=compute_35 -gencode arch=compute_35,code=sm_35  -x cu -o  "src/juliaSet.o" "../src/juliaSet.cu"

/usr/local/cuda-6.5/bin/nvcc --cudart static --relocatable-device-code=false -gencode arch=compute_35,code=compute_35 -gencode arch=compute_35,code=sm_35 -link -o  "JuliaSetDynParallelism"  ./src/juliaSet.o ./src/main.o   -lpng

#video generation
ffmpeg -f image2 -i "julia_%05d.png" -r 40 output.mov

Here another animation

 

Fold operator - a brief detour

This article aims to be a brief practical introduction, for new Haskell programmers, to the fold higher-order function (roughly speaking, a function that takes a function as input and/or produces one as output) and its usage. It will also outline a bit of the theory that lies behind this operator (the term operator comes from the field of recursion theory (Kleene, 1952), even if, strictly speaking, fold is a function in Haskell).

Haskell Fold - Introduction and formal definition

Many problems in functional programming are solved by means of tail-recursive functions (ones that do not perform any further operations on values returned by the recursive call). For instance, let's say we want to compute the product of the integers stored in a list.
What we would do is write down a tail-recursive function (more or less) like this

tailmultiply :: [Int] -> Int -> Int
tailmultiply [] acc     = acc
tailmultiply (x:xs) acc = tailmultiply xs (acc * x)

Note, to stress the tail recursion: a definition like the one that follows would not be tail recursive.

Read On…

Kernel Linux List - An Efficient Generic Linked List

Linux Kernel Linked List

The Linux kernel, especially in past years, was full of different, replicated implementations of generic data structures such as linked lists, stacks and queues. The kernel developers decided to provide a single set of APIs for these kinds of software machinery. In this article we will briefly look at the list.h file (in the Linux source tree under /usr/src/linux-headers-`uname -r`/include/), which provides macros and functions for using a doubly linked circular list. It is a very smartly written piece of code that is worth a look, as it is an opportunity to improve one's C programming skills and at the same time to see an "unconventional" implementation of a common data structure.

From canonical list to Linux kernel linked list

The canonical implementation of a C linked list consists of a structure and two (for a doubly linked list) recursive pointer fields to the structure itself: the links to the previous and subsequent elements of the list.


struct point2DNode{
	int x, y;
	struct point2DNode *prev, *next;
};

Read On…

Francis Poulenc - Rapsodie Negre

“To fall in the void as I fell: none of you knows what that means… I went down into the void, to the most absolute bottom conceivable, and once there I saw that the extreme limit must have been much, much farther below, very remote, and I went on falling, to reach it.” [Cosmicomics, Italo Calvino]

Apple Xserve xeon Linux/Debian armed

Finally, at my Department we successfully converted an unused Apple Xserve cluster into a fully functional Linux/Debian machine.

The cluster is made up of eleven nodes connected by a Myrinet 10G low-latency network, and each node is composed of two quad-core Intel Xeon 2.8 GHz Harpertown processors with 10 GB of DDR2, for a total of 88 cores. It is a quite old and small machine, and hence it was unused. We thought it could be useful, if well re-configured, as an ad-hoc machine for small-sized scientific computation sessions, as a development and testing platform for parallel applications, or for training undergraduate students, who can experiment directly with a real parallel machine. So we decided to cast new light on it by configuring it as a Debian cluster (according to Google, just a few people have done this).

Read On…

The Grossone Numeral System

A recently developed computational methodology for executing numerical calculations with infinities and infinitesimals. The approach has a pronounced applied character and is based on the principle "The part is less than the whole", introduced by the ancient Greeks. This principle is applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). (Ya. D. Sergeyev)

A brief detour into infinity

Infinity is a concept that has attracted and fascinated humans since their earliest times. The earliest mathematical account of infinity comes from Zeno of Elea (490-430 BCE), a pre-Socratic Greek philosopher of southern Italy (named by Aristotle the "inventor of dialectic"). But in order to have a systematic and mathematically rigorous use of the concept of infinity, we have to wait until the 17th century, when an English mathematician, John Wallis, for the first time used the symbol \infty to denote an infinite quantity, in his attempt to divide a region into infinitesimal parts \frac{1}{\infty}. Then, thanks to his concepts of infinity and infinitesimals, Leibniz was able to develop a big part of his work on calculus, though they were still abstract objects, different from appreciable quantities such as finite numbers. But they shared some interesting properties with them (the law of continuity: "whatever succeeds for the finite, also succeeds for the infinite").

In the Cantor set construction we begin with a straight line of length 1, remove its middle third, and repeat the process with the two new pieces over and over again.

 

Read On…

Cellular automata library - CuCCAl

What is it?

A GPGPU CUDA-powered cellular automata library, capable of hiding all the complexity of GPU programming for classical/complex and macroscopic cellular automata. The aim of this library is to make life easier for researchers who need to write and execute big cellular automata models.

Figure 1. Dewdney's Belousov-Zhabotinsky reaction cellular automaton.

The programmer only has to write the elementary processes that compose the transition function, register the substates and set up the automaton parameters (for example, the number of steps and the cellular space size). Memory management happens completely behind the scenes, letting the programmer concentrate only on the model he needs to write.

This library is open source. Help make it better!

Read On…

Nikolai Kapustin

Nikolai Kapustin is an autodidact in composing; he made his first attempt to compose a piano sonata at the age of 13. During his conservatory time he composed and played his Op. 1, a Concertino for piano and orchestra. Op. 1 was a jazz piece and turned out to be his first work performed publicly (1957). He also had his own quintet and was a member of Yuri Saulsky's Big Band.

Read On…