Archive for the ‘Mathematics’ Category

This article discusses a problem that I recently solved on Codility.

The core of the problem is the following:
Given two non-negative integers N and M, the task is to check whether they have the same set of prime divisors.
A prime divisor of an integer P is a prime d such that d * k = P for some positive integer k. You are given a number of such queries, and should return the total number of them that evaluate to *true*.

For instance, for N = 15 and M = 75 our function should return *true*, because the set of prime divisors of 15 is {3, 5}, equal to
the set of prime divisors of 75, while for N = 10 and M = 30 the function should return *false*, since {2, 5} ≠ {2, 3, 5}.
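One standard way to answer a single query is to repeatedly strip from each number the prime factors it shares with the other, using the gcd. Here is a minimal Python sketch of that idea (the function names are mine, not from the original solution):

```python
from math import gcd

def same_prime_divisors(n, m):
    """Return True iff n and m have exactly the same set of prime divisors."""
    def strip_common(a, b):
        # Divide out of `a` every prime factor it shares with `b`.
        g = gcd(a, b)
        while g > 1:
            while a % g == 0:
                a //= g
            g = gcd(a, b)
        return a  # 1 iff every prime factor of the original `a` divides b

    return strip_common(n, m) == 1 and strip_common(m, n) == 1
```

For example, `same_prime_divisors(15, 75)` is `True` (both have prime-divisor set {3, 5}), while `same_prime_divisors(10, 30)` is `False`.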

List Cycle Detection

The linked-list cycle detection problem is a very instructive and fun problem to reason about. This article states the problem first and then explains how we can solve it efficiently, giving some insight into the underlying math.

A list can get corrupted so that a node is linked by more than one node, as in the following figure. This can lead to a never-ending traversal of the list, so it makes sense to solve the following problem:

Circular list

List Cycle Detection - Problem Statement

1. Given a linked list, detect whether the list is circular, i.e. contains a cycle
2. Find the starting node of the cycle (the node with two inward arrows in the figure)

The problem is easily solvable in O(n) time and O(n) space, considering that we can visit the list from the head and store the visited nodes in a separate support list. As the visit continues, we check whether the node we are examining was previously visited (for each node we visit we ask: is this node already contained in the support list?). If yes, the list is circular and that node is the starting point of the cycle. If we reach the tail of the list, the list is not circular.

We can lower the complexity of this approach to O(n log n) time by using a more efficient support (set) data structure, such as a tree set (or expected O(n) with a hash set). But we can do much better, and the rest of the article shows how to obtain O(n) time and O(1) space complexity.
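The support-set approach can be sketched as follows (Python; the Node class and function name are mine, for illustration):

```python
class Node:
    """A minimal singly linked list node (illustrative)."""
    def __init__(self, val):
        self.val = val
        self.next = None

def find_cycle_start_set(head):
    """Detect a cycle by remembering every visited node in a set.

    Returns the first node of the cycle, or None if the list is not circular.
    Uses expected O(n) time and O(n) extra space."""
    seen = set()
    node = head
    while node is not None:
        if node in seen:      # nodes hash by identity: "was this node visited?"
            return node       # first revisited node = start of the cycle
        seen.add(node)
        node = node.next
    return None               # reached the tail: no cycle
```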

List Cycle Detection - Floyd’s algorithm

This algorithm uses the fact that, like clock hands, things iterating on a cycle at different speeds will eventually meet at some point in the future. Consider two runners with velocities v and 2v respectively, starting from the same point in a circular stadium. They will meet again when the slower runner reaches the starting point for the second time. Why? By the time the slower one has completed half a circle, the faster has completed a full lap, and by the time the slower finishes his lap, arriving at the starting point again, the faster has completed a second entire lap.

Things are a bit more complicated in the list cycle detection problem because the iterators (the two runners) do not necessarily start their race from the circular part of the list.

Consider two iterators with velocities of 1 and 2 nodes per step respectively. Suppose the cycle has length M and that it starts at node number A. When the slower iterator reaches node A, the faster is at location 2A, i.e. A nodes past the beginning of the cycle (modulo M). How many iterations will it take before they meet? And at which node?

The situation is described by the following congruence:

A + 2i ≡ i (mod M)

which has solution i = M − A. This means that they will meet after M − A iterations of the slower iterator, at the node M − A positions past the beginning of the cycle, i.e. A nodes before the beginning of the cycle. We can use this fact to count nodes from the beginning of the list and deduce the starting point of the cycle. Once the iterators meet in the cycle, we move the fast iterator back to the beginning of the list and advance both iterators forward one node per step until they match again. When we move the fast iterator back to the head of the list, both iterators are A nodes away from the beginning of the cycle. Because of this, when we advance both of them by one node at a time, they will meet exactly at that node A.

Let's consider now the case A > M. This means that by the time the slower iterator reaches the beginning of the cycle, the faster one has completed more than a full cycle. Where will the faster one be? We argue that once the slow iterator reaches A, the fast one has traveled to node 2A; but since 2A > A + M, within the cycle it sits A mod M nodes past the cycle start. We can now use an argument similar to the previous case and write:

(A mod M) + 2i ≡ i (mod M)

which has solution i = M − (A mod M). This means that the meeting point is A mod M nodes before the beginning of the cycle. If we perform the same operations as in the previous case (move the fast iterator back to the head and advance both one node per step), we obtain the same result: the iterators will meet at the beginning of the cycle. Why? Advancing the fast pointer A steps from the head brings it exactly to the cycle start, while advancing the slow pointer A steps makes it cycle possibly several times (remember that A > M), and it will clearly stop at node A.

In other words, at the start of the second phase the slower pointer sits M − (A mod M) nodes past the beginning of the cycle. We can write A = kM + r, where r = A mod M. After advancing A steps, its offset from the cycle start will be (M − r + kM + r) mod M = (k + 1)M mod M = 0. Since offset 0 is the beginning of the cycle, the result follows.
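The two phases described above can be sketched as follows (Python; the Node class and function name are mine, not from the original post):

```python
class Node:
    """A minimal singly linked list node (illustrative)."""
    def __init__(self, val):
        self.val = val
        self.next = None

def find_cycle_start(head):
    """Floyd's cycle detection in O(n) time and O(1) space.

    Phase 1: advance a slow (1x) and a fast (2x) iterator until they meet.
    Phase 2: move the fast iterator back to the head; advancing both one
    node per step makes them meet at the first node of the cycle."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:              # meeting point inside the cycle
            fast = head
            while fast is not slow:   # both are A nodes from the cycle start
                fast = fast.next
                slow = slow.next
            return slow               # beginning of the cycle
    return None                       # reached the tail: the list is not circular
```

For instance, on a 14-node list whose last node links back to node number 10 (a cycle of length 4 starting at node 10), `find_cycle_start` returns node 10.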

As an example, consider a list with a cycle of length 4 starting at node number 10. The first part of the algorithm tells us that the iterators will meet at node 12 (since 10 mod 4 = 2, the meeting point is 4 − 2 = 2 nodes past the beginning of the cycle). Moving the fast pointer back to the head of the list and iterating one node per step, advancing 10 nodes will lead the slower pointer to node:

• node 12 again, after advancing 4 nodes
• node 12 again, after advancing 4 more nodes
• node 10, after advancing the remaining 2 nodes.

The following illustration depicts how the algorithm works on a list of 8 nodes with a cycle of length 4 starting at node number 4. After 5 steps, the slow (blue) and fast (red) iterators point to the same node, i.e. node number 6.

After that, the fast pointer is moved to the head of the list and both iterators are advanced by one node per step until they meet again. When they do, they meet at the beginning of the cycle.

Execution of the Floyd's algorithm on a list of 8 nodes with a cycle of length 4 starting at node 4.

This article presents CUDA parallel code for the generation of the famous Julia set. Informally, a point of the complex plane belongs to the set if, given a function f(z), the series z_{n+1} = f(z_n) does not tend to infinity. Different functions and different initial conditions eventually give rise to different fractals. One of the most famous is f(z) = z^2 + c (the one that generated the video below).
Some nice pictures may be obtained with the following initial conditions:

1. dendrite fractal
2. san marco fractal
3. siegel disk fractal
4. NEAT cauliflower thingy
5. galaxies
6. groovy
7. frost

Here is a video showing a sequence of pictures generated using different initial conditions. A nice example of how math and computer science can produce art!

CUDA Julia set code

The code makes it very easy to change the function and the initial condition: just edit the juliaFunctor device function.

#define TYPEFLOAT

#ifdef TYPEFLOAT
#define TYPE float
#define cTYPE cuFloatComplex
#define cMakecuComplex(re,i) make_cuFloatComplex(re,i)
#endif
#ifdef TYPEDOUBLE
#define TYPE double
#define cTYPE cuDoubleComplex
#define cMakecuComplex(re,i) make_cuDoubleComplex(re,i)
#endif

__global__ void computeJulia(int* data, cTYPE c, float zoom){
	int i = blockIdx.x * blockDim.x + threadIdx.x;
	int j = blockIdx.y * blockDim.y + threadIdx.y;

	if(i < DIMX && j < DIMY){
		cTYPE p = convertToComplex(i, j, zoom);
		data[i * DIMY + j] = evolveComplexPoint(p, c);
	}
}

cTYPE c0;
__device__ cTYPE juliaFunctor(cTYPE p, cTYPE c){
	// f(z) = z^2 + c (float build; use cuCadd/cuCmul for the double build)
	return cuCaddf(cuCmulf(p, p), c);
}

__device__ int evolveComplexPoint(cTYPE p, cTYPE c){
	int it = 1;
	while(it <= MAXITERATIONS && cuCabsf(p) <= 4){
		p = juliaFunctor(p, c);
		it++;
	}
	return it;
}

__device__ cTYPE convertToComplex(int x, int y,float zoom){

TYPE jx = 1.5 * (x - DIMX / 2) / (0.5 * zoom * DIMX) + moveX;
TYPE jy = (y - DIMY / 2) / (0.5 * zoom * DIMY) + moveY;
return cMakecuComplex(jx,jy);

}



The code above contains the GPU instructions devoted to the computation of a single set. The kernel computeJulia() is executed by DIMX*DIMY threads (the dimensions of the image) that independently apply evolveComplexPoint() to the complex point corresponding to their pixel coordinates, mapped onto the viewport by convertToComplex() (each thread computes a single pixel with integer coordinates). evolveComplexPoint() takes care of checking how fast the point diverges.
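The escape-time loop at the heart of the kernel can be mirrored on the host as a short Python sketch (the function name and iteration cap are mine), which is handy for checking individual points:

```python
def evolve_complex_point(p, c, max_iterations=200):
    """Count iterations of f(z) = z*z + c until |z| escapes the radius 4.

    Mirrors the CUDA evolveComplexPoint() helper: the returned iteration
    count is what colors the corresponding pixel."""
    it = 1
    while it <= max_iterations and abs(p) <= 4:
        p = p * p + c
        it += 1
    return it
```

Points inside the set never escape, so they return `max_iterations + 1`; points that diverge quickly return a small count.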

The code is available here on GitHub. It generates a number of PNG images that can then be merged into a video (like the one presented here) using a command line tool such as ffmpeg.

In order to compile and run it you need the CUDA SDK, a CUDA capable device, and libpng installed on your system.
To compile and/or generate the video simply run:

#compilation
nvcc -I"/usr/local/cuda-6.5/samples/common/inc"  -O3 -gencode arch=compute_35,code=sm_35  -odir "src" -M -o "src/juliaSet.d" "../src/juliaSet.cu"

nvcc  -I"/usr/local/cuda-6.5/samples/common/inc" -O3 --compile --relocatable-device-code=false -gencode arch=compute_35,code=compute_35 -gencode arch=compute_35,code=sm_35  -x cu -o  "src/juliaSet.o" "../src/juliaSet.cu"

/usr/local/cuda-6.5/bin/nvcc --cudart static --relocatable-device-code=false -gencode arch=compute_35,code=compute_35 -gencode arch=compute_35,code=sm_35 -link -o  "JuliaSetDynParallelism"  ./src/juliaSet.o ./src/main.o   -lpng

#video generation
ffmpeg -f image2 -i "julia_%05d.png" -r 40 output.mov



Here is another animation.

This article aims to be a brief practical introduction to the fold higher-order function for new Haskell programmers (roughly speaking, a higher-order function is one that takes a function as input and/or produces one as output) and its usage. It will also outline a bit of the theory that lies behind this operator (the term operator comes from the field of recursion theory (Kleene, 1952), even if, strictly speaking, fold is a function in Haskell).

Haskell Fold - Introduction and formal definition

Many problems in functional programming are solved by means of tail-recursive functions (ones that perform no further operations that need values from the recursive call). For instance, let's say we want to compute the product of the integers stored in a list.
What we would do is write down a tail-recursive function, more or less like this:

tailmultiply :: [Int] -> Int -> Int
tailmultiply [] acc     = acc
tailmultiply (x:xs) acc = tailmultiply xs (acc * x)


Note, to stress the tail recursion, a definition like the one that follows would not be tail recursive, because the multiplication still has to be performed after the recursive call returns:

multiply :: [Int] -> Int
multiply []     = 1
multiply (x:xs) = x * multiply xs

A recently developed computational methodology for executing numerical calculations with infinities and infinitesimals. The approach developed has a pronounced applied character and is based on the principle “The part is less than the whole” introduced by the ancient Greeks. This principle is applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite).(Ya. D. Sergeyev)

A brief detour into infinity

Infinity is a concept that has attracted and fascinated humans since the earliest times. The earliest mathematical account of infinity comes from Zeno of Elea (490-430 BCE), a pre-Socratic Greek philosopher of southern Italy (named by Aristotle the "inventor of dialectic"). But for a systematic and mathematically rigorous use of the concept of infinity we have to wait until the 17th century, when the English mathematician John Wallis for the first time used the symbol ∞ to denote an infinite quantity, in his attempt to divide a region into infinitesimal parts. Then, thanks to the concepts of infinity and infinitesimal, Leibniz was able to develop a large part of his work on calculus; but these were still abstract objects, different from appreciable quantities such as finite numbers, although they shared some interesting properties with them (the law of continuity: "whatever succeeds for the finite, also succeeds for the infinite").

To construct the Cantor set, we begin with a straight line of length 1, remove its middle third, and repeat the process on the two new pieces over and over again.
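The construction can be sketched in a few lines of Python (illustrative, with a function name of my choosing): after n steps there are 2^n intervals left, with total length (2/3)^n.

```python
def cantor_intervals(depth):
    """Return the intervals of [0, 1] remaining after `depth` middle-third removals."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        refined = []
        for a, b in intervals:
            third = (b - a) / 3
            refined.append((a, a + third))    # keep the left third
            refined.append((b - third, b))    # keep the right third
        intervals = refined                    # the middle third is dropped
    return intervals
```

After 3 steps, for instance, 8 intervals remain with total length (2/3)^3 = 8/27.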