In this short lesson we will discuss how to parallelize a simple and rather inefficient (because this is not an in-place version) implementation of quick-sort using asynchronous tasks and futures.

We will perform some benchmarking and performance analysis and we will try to understand how we can further improve our implementation.

Quick sort

In this section, I will briefly refresh your memory on quick-sort by first showing you a simple and self-explanatory Haskell version. We will then write a serial C++ version of the same algorithm, which we will use as a basis for our parallelization.

Here is the Haskell version:

quicksort [] = []
quicksort (p:xs) = (quicksort lesser) ++ [p] ++ (quicksort greater)
  where
    lesser  = filter (< p) xs
    greater = filter (>= p) xs


It is beautifully simple: to sort a list with at least one element (call its head p), it is only necessary to partition the rest of the elements xs into two sublists:

• lesser: containing all the elements in xs smaller than p
• greater: containing all the elements in xs greater than p
Once both sublists are sorted, we can finally return the whole sorted list by simply gluing lesser, p and greater together, in this order.

If you still have trouble understanding the quick-sort algorithm please refer to Wikipedia.

Quick-sort serial version

The following is the serial C++ implementation of the same idea described above. It should be pretty easy to map it to the Haskell version. Run it on Wandbox

#include <algorithm>
#include <iterator>
#include <vector>
using std::copy_if; using std::vector;

template <typename T>
void quick_sort_serial(vector<T>& v) {
  if (v.size() <= 1) return;
  auto start_it = v.begin();
  auto end_it = v.end();

  const T pivot = *start_it;

  // partition the list
  vector<T> lesser;
  copy_if(start_it, end_it, std::back_inserter(lesser),
          [&](const T& el) { return el < pivot; });

  vector<T> greater;
  copy_if(start_it + 1, end_it, std::back_inserter(greater),
          [&](const T& el) { return el >= pivot; });

  // solve subproblems
  quick_sort_serial(lesser);
  quick_sort_serial(greater);

  // merge
  std::copy(lesser.begin(), lesser.end(), v.begin());
  v[lesser.size()] = pivot;
  std::copy(greater.begin(), greater.end(),
            v.begin() + lesser.size() + 1);
}


Parallelizing Quick-sort using std::future

In order to speed things up, we are going to exploit the fact that quick-sort is a divide-and-conquer algorithm in which each subproblem can be solved independently: creating and sorting lesser and greater are two independent tasks, and we can easily perform them on different threads.

The following is a first parallel version of the quick_sort_serial() shown above.
Run it on Wandbox

template <typename T>
void filter_less_than(const vector<T>& v, vector<T>& lesser, const T& pivot) {
  for (const auto& el : v) {
    if (el < pivot) lesser.push_back(el);
  }
}

template <typename T>
void quick_sort_parallel1(vector<T>& v) {
  if (v.size() <= 1) return;
  auto start_it = v.begin();
  auto end_it = v.end();

  const T pivot = *start_it;
  vector<T> lesser;
  auto fut1 = std::async(
      std::launch::async,
      [&]() {
        filter_less_than<T>(std::ref(v), std::ref(lesser), pivot);
        quick_sort_parallel1<T>(std::ref(lesser));
      });

  vector<T> greater;
  copy_if(start_it + 1, end_it, std::back_inserter(greater),
          [&](const T& el) { return el >= pivot; });

  quick_sort_parallel1(greater);

  fut1.wait();

  std::copy(lesser.begin(), lesser.end(), v.begin());
  v[lesser.size()] = pivot;
  std::copy(greater.begin(), greater.end(),
            v.begin() + lesser.size() + 1);
}


As you can notice, the creation and sorting of lesser and greater are performed in parallel. Each thread running an instance of quick_sort_parallel1() spawns another thread running quick-sort on one of the two subproblems, while the other subproblem is solved by the current thread.

This is exactly what we are doing when we spawn the async task:
we create a task that populates lesser with all the elements of v less than pivot and, once that is done, sorts it.
Please note that everything we need to modify by reference needs to be wrapped in std::ref, as we discussed in the previous lessons.

The following picture shows how the execution unfolds for the unsorted list: [2,7,1,6,9,5,8,3,4,10]:

The following code shows how to spawn an async task solving the lesser subproblem:

vector<T> lesser;
auto fut1 = std::async([&]() {
  filter_less_than<T>(std::ref(v), std::ref(lesser), pivot);
  quick_sort_parallel1<T>(std::ref(lesser));
});


While this task is running on the newly created thread, we can solve greater on the current thread.

The asynchronous task will recursively spawn other async tasks until a list of size <=1 is created, which is of course already sorted. There is nothing to do in this case.

Once the main thread is done sorting the greater list, it waits for the asynchronous task to be ready using the std::future::wait() function.
Once wait() returns, both lists are sorted, and we can proceed with merging the results to finally obtain the whole sorted list.

Performance analysis

Let's quickly analyze our implementation. We will compare execution time for the single-thread and async-parallel versions above.

Let's start our analysis by looking at this graph depicting the execution time (average of 10 runs) for both versions above:

It might be surprising to see that the async parallel version is way slower than the single-threaded one: ~55x slower!
Why is that? The reason is that the parallel version creates a new thread for every single subproblem, even for the ones that are quite small.
Threads are costly for the OS to manage: they use resources and need to be scheduled. For small tasks, the overhead caused by the additional thread is larger than the gain in performance that we might get by processing the sublist in parallel. This is exactly what is happening here.

In order to solve this issue, we want to modify the async code above so that a new thread is spawned only when the input list v is larger than a certain threshold. The code below implements the aforementioned idea:

static const size_t THRESHOLD = 4000;  // tuning parameter

template <typename T>
void quick_sort_async_lim(vector<T>& v) {
  if (v.size() <= 1) return;
  auto start_it = v.begin();
  auto end_it = v.end();

  const T pivot = *start_it;
  vector<T> lesser;

  vector<T> greater;
  copy_if(start_it + 1, end_it, std::back_inserter(greater),
          [&](const T& el) { return el >= pivot; });

  if (v.size() >= THRESHOLD) {
    auto fut1 = std::async([&]() {
      filter_less_than<T>(std::ref(v), std::ref(lesser), pivot);
      quick_sort_async_lim<T>(std::ref(lesser));
    });

    quick_sort_async_lim(greater);
    fut1.wait();

  } else {
    // problem is too small: solve it serially
    copy_if(start_it, end_it, std::back_inserter(lesser),
            [&](const T& el) { return el < pivot; });
    quick_sort_async_lim(lesser);
    quick_sort_async_lim(greater);
  }

  std::copy(lesser.begin(), lesser.end(), v.begin());
  v[lesser.size()] = pivot;
  std::copy(greater.begin(), greater.end(),
            v.begin() + lesser.size() + 1);
}


As you can notice, the only addition in this optimized version is that a new thread is spawned only when the size of the input list is larger than THRESHOLD. If the list is too small, we fall back on the classic single-threaded version.
The following pictures show the results for the optimized version above with THRESHOLD=4000. As you can notice, the execution time drops considerably with respect to the single-threaded version: we have achieved a ~4x speedup with minimal programming effort.

We have introduced a new parameter in our code, and we need to figure out the best value of THRESHOLD. In order to do so, let's analyze the performance of the code above for various values of the threshold.
The following graph depicts the execution time for various values of THRESHOLD (note that the y-axis is in log scale). The execution time drops quite abruptly as THRESHOLD grows from 0 to 300.
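Measurements of this kind can be collected with a simple harness; the following is a sketch (the function name, seed and input distribution are mine, purely illustrative) of how the execution time of a sort routine might be averaged over several runs:

```cpp
#include <algorithm>
#include <chrono>
#include <random>
#include <vector>

// Averages the wall-clock time (in milliseconds) taken by `sort_fn`
// on random vectors of size n, over `runs` repetitions.
template <typename Sorter>
double avg_millis(Sorter sort_fn, size_t n, int runs = 10) {
  std::mt19937 gen(42);  // fixed seed: same inputs for every candidate
  std::uniform_int_distribution<int> dist(0, 1000000);
  double total = 0;
  for (int r = 0; r < runs; ++r) {
    std::vector<int> v(n);
    for (auto& x : v) x = dist(gen);
    auto t0 = std::chrono::steady_clock::now();
    sort_fn(v);
    auto t1 = std::chrono::steady_clock::now();
    total += std::chrono::duration<double, std::milli>(t1 - t0).count();
  }
  return total / runs;
}
```

Running it once per candidate THRESHOLD value produces the data points plotted above.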

Conclusion

We have used std::future to parallelize the quick-sort algorithm. The async code differs only slightly from the serial single-threaded implementation, but runs 4x faster. On the other hand, we have learned that running too many threads is definitely not a good idea, because each thread comes with an overhead: the OS needs to allocate resources and time to manage them.

Introduction

In this lesson we will talk about a way of returning values from threads; more precisely, we will talk about std::future, a mechanism that C++ offers to run asynchronous tasks and query for the result in the future.
A future represents an asynchronous task, i.e. an operation that runs in parallel to the current thread, and on which the latter can wait (if it needs to) until the result is ready.
You can use a future whenever you need a thread to wait for a one-off event to happen. The thread can check the status of the asynchronous operation by periodically polling the future while still performing other tasks, or it can simply wait for the future to become ready.

Introduction

In the previous lesson we have seen how data can be protected using mutexes. We now know how to make threads do their work concurrently without messing with shared resources and data. But sometimes we need to synchronize their actions in time, meaning that we might want a thread t1 to wait until a certain condition is true before allowing it to continue its execution.

This lesson discusses the tools that we can use to achieve such behavior efficiently using condition variables.

In this lesson, we will cover the topic of sharing data and resources between threads. Imagine a scenario where an integer o needs to be modified by two threads t1 and t2. If we are not careful in handling this scenario, a data race might occur. But what is a data race exactly?

Data Race

A data race occurs when two or more threads access some shared data and at least one of them is modifying it. Because threads are scheduled by the OS, and scheduling is not under our control, we do not know upfront which thread is going to access the data first. The final result might therefore depend on the order in which the threads are scheduled.

Race conditions typically occur when an operation, in order to be completed, requires multiple steps or sub-operations, or the modification of multiple data. Since these sub-operations end up being executed by the CPU as separate instructions, other threads can potentially mess with the state of the data while the first thread's operation is still ongoing.

tutorial3

Modern C++ Concurrency - How to use a thread object correctly.

This lesson is going to be more theory focused, because we will cover some important facts about how to correctly use the thread object.
For instance, we will be talking about how to:

1. pass threads around
2. have side effects on objects passed to a thread by reference
3. avoid common undefined reference situations
4. identify threads uniquely by an id.

tutorial2

Modern C++ Concurrency - Cumulative sum of a vector - Part 2

In this tutorial we will continue the exercise we started in part 1, and we will:

1. split the work among a number of threads that will be specified by the user via command line
2. perform some benchmarking to see how our code scales as the number of threads increases, comparing the execution time of the program running on one thread versus an increasing number of threads.

tutorial1

Modern C++ Concurrency - Cumulative sum of a vector - Part 1

In this tutorial we will write C++ code that takes as input a large list of numbers and returns their cumulative sum.
In order to speed up the process, we will write the code so that it uses two threads. Along the way we will learn how to use a callable object, with operator() redefined, to create and run a thread.

Hello world Concurrency in C++

What is concurrency?

Let’s start off by answering the following question: what is concurrency? Intuitively, concurrency is the execution of operations at the same time. The key part here is at the same time.
Computers are concurrent machines. Nowadays PCs are equipped with several processors and that means that we can exploit all of them at the same time to speed up our software.
Normally an executable runs on a single processor, meaning that at any given time only one of its instructions is executed.
Concurrency is all about executing several instructions at the same time for the same executable.

Hello world code

The C++11 standard introduced a new thread library that allows for standardized programming of concurrent software using threads, and it also offers a bunch of other tools to make concurrent programming safe: synchronization and atomic operations, for instance. Do not worry if none of these words rings a bell now; we will learn a lot about them.

Let’s start with writing our first concurrent code. Read On…

This article discusses a problem that I recently solved on Codility.

The core of the problem is the following:
Given two non-negative integers N and M, the task is to check whether they have the same set of prime divisors.
A prime divisor of an integer P is a prime d such that d·k = P for some positive integer k. You are given a number of such queries, and should return the total number of them that evaluate to true.

For some pairs our function should return *true*, because the set of prime divisors of N is equal to the set of prime divisors of M, while for other pairs the function should return *false*.

A Few Words on Polymorphism

Polymorphism is the ability to assign to a pointer of a base class the address of an instance of one of its derived classes. When a method is invoked through that pointer, the derived implementation of the method, if provided, is called; otherwise, the inherited one is. The following is an example of this feature.

#include <iostream>
using namespace std;

class Polygon {
protected:
  double width, height;
public:
  void set_values(double a, double b) { width = a; height = b; }
  virtual double area() const = 0;        // pure virtual: must be overridden
  virtual int perimeter() const { return -1; }
};

class Rectangle : public Polygon {
public:
  double area() const { return width * height; }
  int perimeter() const { return width * 2 + height * 2; }
};

class Triangle : public Polygon {
public:
  double area() const { return width * height / 2; }
  // perimeter() is inherited from Polygon
};

int main() {
  Rectangle rect;
  Triangle trgl;
  Polygon* ppoly1 = &rect;
  Polygon* ppoly2 = &trgl;
  ppoly1->set_values(4, 5);
  ppoly2->set_values(4, 5);
  cout << rect.area() << '\n';
  cout << trgl.area() << '\n';
  return 0;
}


Polymorphism is implemented with a cost in terms of memory and time. A virtual method table is stored for each class, and a pointer to it is added (transparently, by the compiler) to the definition of each class containing a virtual method (or deriving from a class that contains one). The table in turn contains pointers to the actual implementations of the virtual methods for the derived class. The compiler only knows the object through a pointer to its base class, but it can still generate correct code, because it indirects calls to overridden methods via the virtual method table: it looks the method up in the table and finally calls it. So polymorphism comes at the cost of storing a virtual method table per class, a pointer to it in each instance of a polymorphic class, and two levels of indirection when calling a virtual method.
Another pitfall is that, since the indirection is required, virtual methods usually cannot be inlined.

Curious Recurring Template Pattern

The key idea is polymorphism without extra run-time cost: static polymorphism.

Templates can mitigate the performance problems of dynamic polymorphism via so-called static polymorphism, or simulated dynamic binding. Read On…
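A minimal sketch of the pattern (the class names here are illustrative):

```cpp
// CRTP: the base class is parameterized on the derived type, so the
// "virtual" call is resolved at compile time, with no vtable and no
// pointer indirection.
template <typename Derived>
struct Shape {
  double area() const {
    return static_cast<const Derived*>(this)->area_impl();
  }
};

struct Square : Shape<Square> {
  double side;
  double area_impl() const { return side * side; }
};

struct Circle : Shape<Circle> {
  double radius;
  double area_impl() const { return 3.14159265358979 * radius * radius; }
};
```

Code written against Shape<Derived> gets the derived behavior, yet the compiler can inline area_impl() freely.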

Complex number in OpenCL - cl_complex

Recently I've been involved in the development of OpenCAL, a library for the parallel execution of cellular automata and finite difference models.

I thought it could be fun to render some huge fractal with it, and so I ended up writing some OpenCL code for the generation of Julia sets. Unfortunately OpenCL does not provide support for complex numbers (CUDA does; check the following link out: CUDA complex number example), so I had to write it myself.

The following might be useful to anyone in need of support for complex number operations such as exponentiation, argument, modulus etc. in OpenCL.
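As a taste of what such helpers look like, here is a sketch in plain C++ (OpenCL C is very close); the type and function names are illustrative, not an actual library API:

```cpp
#include <cmath>

// A complex number as a pair of floats, mirroring what an OpenCL
// float2-based implementation would store.
struct cl_complex { float re, im; };

// (a+bi)(c+di) = (ac - bd) + (ad + bc)i
cl_complex cmult(cl_complex a, cl_complex b) {
  return {a.re * b.re - a.im * b.im, a.re * b.im + a.im * b.re};
}

// modulus |a| and argument arg(a)
float cmod(cl_complex a) { return std::sqrt(a.re * a.re + a.im * a.im); }
float carg(cl_complex a) { return std::atan2(a.im, a.re); }
```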

Here is a link to a 324 Megapixels Julia set Rendered image (warning, size >150 MB)

Weighted Tree Vertex Cover Problem

A vertex cover of a graph G = (V, E) is a subset V' ⊆ V such that every edge of the graph is incident to at least one vertex in V'.
We will derive an algorithm for finding the weight of a minimal (yes, it is not unique) vertex cover for a subclass of graphs, i.e. trees (connected acyclic graphs with the property that exactly one path exists between each pair of vertices).

Remember that vertex cover on general graphs is an NP-complete problem (NP-hard, i.e. at least as hard as all NP problems, and in NP itself): no deterministic polynomial-time algorithm for it has been discovered (if you discover one, contact me, we will be millionaires).

Tree Vertex Cover - Problem Definition

Given a weighted tree, write an algorithm for computing a vertex cover with minimum weight, i.e. a V' that is a vertex cover and whose sum of element weights is minimal.

The following is the tree structure that we will use throughout the article.

#include <array>
using std::array;

template <typename T, int DEGREE>
struct node {
  array<node*, DEGREE> children;

  T data;
  int weight;
  int mincover;  // weight of a minimal cover of the subtree rooted here

  node(const T& v, int _weight) : data(v), weight(_weight), mincover(0) {}
};



What if the weight is equal to the degree of the node?

The first observation we can make is that the root node can either be or not be in the vertex cover. If we include it in the solution, then we are sure that all the edges from it to its children are covered, and to solve the problem we only need to compute the cover of its children (which is a simpler problem). Read On…
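The case analysis above (root in the cover vs. root not in the cover) can be sketched as the following dynamic program over the tree; a simplified node type is used here for brevity, and the names are mine:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

struct Node {
  int weight;
  std::vector<Node*> children;
};

// Returns {cost if n is in the cover, cost if n is not in the cover}.
std::pair<int, int> solve(const Node* n) {
  int taken = n->weight, skipped = 0;
  for (const Node* c : n->children) {
    auto [t, s] = solve(c);
    taken += std::min(t, s);  // n covers the edge to c: c is free to choose
    skipped += t;             // the edge to c must then be covered by c
  }
  return {taken, skipped};
}

int min_vertex_cover(const Node* root) {
  auto [t, s] = solve(root);
  return std::min(t, s);
}
```

For a root of weight 10 with two leaf children of weight 1, for instance, taking both children (total weight 2) beats taking the root.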

Programming Interview Question - Merge Intervals

This post explores the solutions to question #4 of the famous website cakeinreview (here the link).

Programming Interview Question - Merge Intervals - Problem Statement

Given a list of pairs of integers, return a list of merged or condensed intervals.

Your function should take care of corner cases, like merging two intervals that barely overlap. Give a working solution first; then try to lower its complexity.
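A sketch of the classic sort-then-sweep approach, which runs in O(n log n) (the function name and interval representation are mine):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Merges overlapping (or touching) intervals.
std::vector<std::pair<int, int>> merge_intervals(
    std::vector<std::pair<int, int>> v) {
  if (v.empty()) return v;
  std::sort(v.begin(), v.end());  // by start, then end
  std::vector<std::pair<int, int>> out{v.front()};
  for (size_t i = 1; i < v.size(); ++i) {
    if (v[i].first <= out.back().second)  // overlaps the last merged interval
      out.back().second = std::max(out.back().second, v[i].second);
    else
      out.push_back(v[i]);  // disjoint: start a new interval
  }
  return out;
}
```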

Tower of Hanoi - C++

This brief article is about the Tower of Hanoi. I wrote this super simple C++ program for a student and thought it might be helpful.

It works on the idea that, in order to move n disks from pole 1 to pole 3, we first need to move the top n-1 disks to a support pole (choosing the right one is part of the solution; see the code for further detail), then move the n-th disk to the correct position, and finally move the n-1 disks from the support pole to the correct location. Let the recursion do the magic!

The base case is when we have only one disk to move: simply move it to the correct pole.

Tower of Hanoi - C++ Code

Construct a Binary Tree from its inorder and preorder

This article will investigate the problem of reconstructing a binary tree given its inorder and preorder traversal.

Let's say for instance that we have the following binary tree (see figure)

Binary Tree

which has the following in order and preorder traversal respectively.

Given IN and PRE how can we construct the original tree?

The key idea is to observe that the first element of PRE is the root of the tree, and that the same element appears somewhere in IN, say at position k. This means that the inorder traversal has processed k elements before processing the root, which in turn means that the left subtree of the root contains exactly k nodes. Obviously, the first k elements of IN belong to the left subtree and all the others belong to the right subtree.

We will use this idea to write a recursive algorithm that builds the tree starting from its traversals. The algorithm works as follows: Read On…
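The idea above can be sketched as follows (the names and the value-to-index map are mine; the map makes locating the root in IN an O(1) operation):

```cpp
#include <unordered_map>
#include <vector>

struct TreeNode {
  int val;
  TreeNode *left = nullptr, *right = nullptr;
  explicit TreeNode(int v) : val(v) {}
};

// Builds the subtree whose preorder starts at pre[ps] and whose inorder
// occupies in[is .. is+n).
TreeNode* build(const std::vector<int>& pre, int ps,
                const std::vector<int>& in, int is, int n,
                const std::unordered_map<int, int>& pos) {
  if (n == 0) return nullptr;
  TreeNode* root = new TreeNode(pre[ps]);  // first preorder element = root
  int k = pos.at(pre[ps]) - is;            // size of the left subtree
  root->left  = build(pre, ps + 1,     in, is,         k,         pos);
  root->right = build(pre, ps + 1 + k, in, is + k + 1, n - k - 1, pos);
  return root;
}

TreeNode* build_tree(const std::vector<int>& pre, const std::vector<int>& in) {
  std::unordered_map<int, int> pos;  // value -> index in the inorder
  for (int i = 0; i < (int)in.size(); ++i) pos[in[i]] = i;
  return build(pre, 0, in, 0, (int)in.size(), pos);
}
```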

List Cycle Detection

The linked list cycle detection problem is a very instructive and fun problem to reason about. This article states the problem first and then explains how we can solve it efficiently, while giving some insight into the underlying math.

A list can get corrupted, and some node can be pointed to by more than one node, as in the following figure. This could lead to a never-ending traversal of the list. So it makes sense to solve the following problem:

Circular List

List Cycle Detection - Problem Statement

1. Given  a linked list, detect if the list is circular i.e. contains a cycle
2. Find the starting node of the cycle (the node with two inward arrows in the figure)

The problem is easily solvable in O(n²) time and O(n) space, considering that we can visit the list from the head and store the visited nodes in a separate support list. As the visit continues, we check whether the node we are examining was previously visited (for each node we visit, we ask: is this node already contained in the support list?). If yes, the list is circular and that node is the starting point of the cycle. If we reach the tail of the list, then the list is not circular.

We can lower the complexity of this approach down to O(n log n) time by using a more efficient support data structure (like a tree set). But we can do much better, and the rest of the article will show how to obtain O(n) time and O(1) space complexity.

List Cycle Detection - Floyd’s algorithm

This algorithm uses the fact that, like clock hands, things iterating on a cycle at different speeds will eventually meet at some point in the future. Consider two runners with velocities v and 2v respectively, starting from the same point in a circular stadium. They will meet again when the slower runner reaches the starting point for the second time. Why? By the time the slower one has completed half a circle, the faster has completed a full circle, and by the time the slower finishes his lap, arriving at the starting point again, the faster has completed a second entire lap.

Things are a bit more complicated in the list cycle detection problem, because the iterators (the two runners) do not necessarily start their race from the circular part of the list.

Consider two iterators with velocities 1 and 2 nodes per step respectively. Suppose the cycle has length M and that it starts at node number m, with m < M for now. When the slower iterator reaches node m, the faster is at node 2m, i.e. m nodes past the beginning of the cycle. How many iterations will it take before they meet? And at which node?

The situation is described by the following congruence, where i counts the steps taken after the slower iterator enters the cycle and positions are offsets from the cycle start, taken modulo M:

m + 2i ≡ i (mod M)

which has solution i = M − m. This means that they will meet after M − m iterations of the slower iterator inside the cycle, i.e. m nodes before the beginning of the cycle, and we can use this fact to deduce the starting point of the cycle by counting nodes from the beginning of the list. Once the iterators meet inside the cycle, we move the fast iterator back to the beginning of the list and advance both iterators one node per step until they match again. When we move the fast iterator back to the head of the list, both iterators are m nodes away from the beginning of the cycle. Because of this, when we advance both of them by one node per step, they will eventually meet exactly at that node, the beginning of the cycle.

Let's now consider the case m ≥ M. This means that by the time the slower iterator reaches the beginning of the cycle, the faster one has completed more than one full cycle. What will the starting point of the faster one be? We argue that once the slower reaches node m, the faster is at node 2m; but since m ≥ M, inside the cycle it sits at offset m mod M. We can now use an argument similar to the previous case and write:

(m mod M) + 2i ≡ i (mod M)

which has solution i = M − (m mod M). This means that the meeting point is m mod M nodes before the beginning of the cycle. If we do the same operations as in the previous case, moving the fast iterator back to the head and advancing both one node per step, we obtain the same result: the iterators meet at the beginning of the cycle. Why? Advancing the in-cycle iterator by m steps makes it go around the cycle possibly several times (remember that m ≥ M), and it will clearly stop at the beginning of the cycle.

In other words, after the first phase the slower pointer sits at offset M − (m mod M) inside the cycle. Writing m = qM + r with r = m mod M, after advancing m more steps it will be at offset (M − r + m) mod M = (M − r + qM + r) mod M = 0. Since offset 0 is the beginning of the cycle, the result follows.

As an example, consider a list with a cycle of length 4 starting at node number 10. The first part of the algorithm tells us that the iterators will meet at node 12. Moving the fast pointer back to the head of the list and iterating one node per step with both iterators will lead the slower pointer to node:

• 12 again, after advancing 4 nodes
• 12 again, after advancing 4 more nodes
• 10, after advancing the remaining 2 nodes.

The following illustration depicts how the algorithm works on a list of 8 nodes with a cycle of length 4 starting at node number 4. After 5 steps, the slow (blue) and fast (red) iterators point to the same node, i.e. node number 6.

After that, the fast pointer is moved to the head of the list and both iterators are incremented by one until they meet again. When they do, they meet at the beginning of the cycle.

Execution of the Floyd's algorithm on a list of 8 nodes with a cycle of length 4 starting at node 4.
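The whole procedure can be sketched as follows (the node type and function name are illustrative):

```cpp
struct ListNode {
  int val = 0;
  ListNode* next = nullptr;
};

// Floyd's cycle detection: returns the first node of the cycle,
// or nullptr if the list is not circular.
ListNode* find_cycle_start(ListNode* head) {
  ListNode *slow = head, *fast = head;
  while (fast && fast->next) {
    slow = slow->next;        // one node per step
    fast = fast->next->next;  // two nodes per step
    if (slow == fast) {       // the iterators met inside the cycle
      fast = head;            // restart the fast one from the head...
      while (slow != fast) {  // ...and advance both one node per step
        slow = slow->next;
        fast = fast->next;
      }
      return slow;            // they meet at the cycle's first node
    }
  }
  return nullptr;             // reached the tail: no cycle
}
```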

HOW-TO: Dynamic Message of the day

The final result will be something like the following:

It mixes a fun message generated using fortune and cowsay, which you can install using


sudo dnf install fortune-mod cowsay



utilities, with some informative details about the status of the system, such as:

• Ram and Swap available and used
• Disk space

The script file can be easily configured and extended to suit your needs. Colors can also be easily customized.

Distributed  Hadoop and HBase installation - Fedora Linux

In this post I will describe how to get started with the latest versions of Hadoop and HBase, covering all the steps needed to obtain a working Hadoop installation. The steps described here can easily be used to perform a working installation on a large cluster (even though that may require additional steps, such as a shared filesystem).

Prerequisites

 sudo dnf install openssh openssh-askpass openssh-clients openssh-server

Don't forget to start the ssh service using the following command:

 sudo service sshd start

Programming Interview Question

Then, whenever the predicate holds for an element M[i][j], the entire row i and column j are set to a given value.

For examples if the following is used as input matrix

4 9 14 19 24
3 8 13 18 23
2 7 12 17 22
1 6 11 16 21
0 5 10 15 20

using the following equality predicate (==3) (i.e. it returns true if the passed parameter equals 3) and -1 as the value, the resulting matrix is:

-1 9 14 19 24
-1 -1 -1 -1 -1
-1 7 12 17 22
-1 6 11 16 21
-1 5 10 15 20

Hint: use templates to make the procedure as general as possible.

Question: Given a square matrix of size M and type T, rotate the matrix by 90 degrees counterclockwise in place.

For example, the algorithm should return the second matrix below if the first one is passed as input.

0 1 2 3 4
5 6 7 8 9
10 11 12 13 14
15 16 17 18 19
20 21 22 23 24

4 9 14 19 24
3 8 13 18 23
2 7 12 17 22
1 6 11 16 21
0 5 10 15 20

Programming Interview Question

Question: given two strings, write a method to decide if one is a permutation of the other.

This is a common question that can be easily solved if we know the size of the alphabet in advance. A straightforward approach is to sort both strings and then compare them; this approach has complexity O(n log n) due to sorting. We can do better and lower the complexity to O(n) if we reason as follows. Take a boolean sequence of the same size as the alphabet, initialized to all false. What happens if, for each char we find in the first string s, we negate the value of the corresponding bool in the sequence? We end up with true at the positions of chars that appeared an odd number of times, and false elsewhere. Now, if the other string v is a permutation of the first one, we all agree that it contains the same number of occurrences of each char. This means that if we apply the same process to the boolean sequence using the string v as input, each bool is negated an even number of times overall, so its value will be the same as at the beginning of the process (all false). So if the final sequence does not contain any true value, the strings contain the same elements, i.e. they are permutations of each other.

Example:

1. s = "abbccd", v = "cbdabc", sequence over {a, b, c, d} = [0, 0, 0, 0]
2. Apply negation using s
• [a:1, b:0, c:0, d:1]
3. Apply negation using v
• [a:0, b:0, c:0, d:0]

A possible C++ implementation is shown here
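The linked implementation is not reproduced here; the following is a count-based sketch of the same idea (it keeps full counts rather than just parities, which also distinguishes cases like "aaab" vs "abbb" where the parities agree but the counts differ):

```cpp
#include <array>
#include <string>

// Tally occurrences over a fixed 256-symbol alphabet for the first
// string, subtract them for the second, then check that every count
// came back to zero.
bool is_permutation(const std::string& a, const std::string& b) {
  if (a.size() != b.size()) return false;
  std::array<int, 256> count{};  // zero-initialized
  for (unsigned char c : a) ++count[c];
  for (unsigned char c : b) --count[c];
  for (int n : count)
    if (n != 0) return false;
  return true;
}
```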

Programming Interview Question: Unique Characters in a String

Question: implement an algorithm to determine if a string has all unique characters

The idea behind this (fairly easy) question is that whenever we find that any char is repeated at least twice, we should return false. Since we have to look at each char at least once, we are looking for a solution whose complexity is at least O(n). We are free to use any support data structure, of size proportional to the ASCII charset.
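A sketch using a fixed-size bitset as the support structure (the function name is mine):

```cpp
#include <bitset>
#include <string>

// O(n) scan with one bit per possible ASCII char: bail out as soon
// as a char is seen for the second time.
bool all_unique(const std::string& s) {
  std::bitset<256> seen;
  for (unsigned char c : s) {
    if (seen[c]) return false;  // char already encountered
    seen[c] = true;
  }
  return true;
}
```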

Bird Flocking Simulation

Bird flocking is an extremely interesting natural phenomenon that has been widely studied, as witnessed by the number of papers in the literature. I will present here a work on the aggregate motion of a large number of boids in a virtual environment, in the presence of predators, using CUDA as the computational framework.

Beautiful Flocking motion

Collective motion, or flocking, appears at different fields and scales in nature, and several mathematical tools have been developed for analyzing such motions:

1. organisms are treated as particles in Brownian motion, combined with attraction/repulsion forces
2. differential equation models
3. agent-based models.

The model presented here is based on the work of Reynolds (1987), which rests on three key behavioral rules:

• Cohesion: to attempt to stay close to nearby flock-mates;
• Collision avoidance: to evade objects that are too close;
• Velocity/Heading Matching: to head in the same direction of nearby flock-mates

and extends it by adding predator avoidance and the interaction of multiple species and bird groups.

Bird Flocking Model

The environment is parameterized using a set of parameters that describe the size of the virtual environment and the duration of a timestep. What is really interesting is the set of bird parameters that describe how a bird behaves and reacts to events in its surroundings. Some notable parameters include the Field Of View (FOV), peak velocity, thrust and others (see figure).

Bird Parameters

The environment is partially observable, and the portion of space that is visible to each bird is defined by its FOV, which is defined as follows:

let p be the position vector of an object in the bird's frame of reference; then the object is the bird's neighbor if and only if certain conditions on p hold, involving the maximum horizontal range of view and the maximum vertical range of view.

Cohesion

In formal terms, a bird's centroid at time t is given by:

Cohesion

which is basically a weighted average of the neighbors' positions.

Separation

A bird tries to keep a certain distance between itself and its neighbors. A bird's separation vector at time t is given by:

where a per-neighbor coefficient determines how strong the repulsion against that neighbor is.

Alignment

Bird's alignment is computed as follows

It is a weighted average of the neighbors' heading directions.

Other species and predator avoidance
Other-species avoidance is a behavior pretty much similar to separation. The only difference is that only birds that belong to other species contribute to the result.

Predator avoidance is also a "flee" or separation behavior, but here we do not take into account the predator's current position: instead, birds try to "separate" from the predator's next position (predicted from the predator's current position, velocity and acceleration).

The predator avoidance vector is defined as follows:

where:

• the bird's set of predators
• the predator avoidance coefficient, which depends on the minimum distance at which a bird avoids a predator.

The model has been implemented in CUDA to speed up the simulation. The following is a short video which I used during my presentation at the PDP16 conference. The model and the implementation described in this article are presented in much greater detail in the following slides (download here).

Last summer I was invited to CodeWeek 2015, organized by Hacklab CS at UNICAL, to talk about Haskell and functional programming.

Here is the video of my (short) talk, in Italian:

Slides Here

Byte Number Conversion

For a project I'm working on these days I was forced to convert raw bytes to and from several numerical types (binary format).

Bytes are represented in C/C++ as unsigned char, and std::string is often used as a byte buffer, as are the old well-known raw arrays. I wrote a simple struct that relieves the pain of performing such conversions. It exposes two functions, fromBytes and toBytes; fromBytes takes an additional boolean parameter that takes care of different endianness.

/*
  Author: Davide Spataro 2016

  Converts a byte buffer to/from type T
*/
typedef unsigned char byte;

template <class T>
struct converter {

    static const size_t size = sizeof(T);

    union conv {
        T value;
        byte bytes[sizeof(T)];
    } c;

    T fromBytes(const byte *bytes, bool endianness = false) {
        if (endianness) {
            #pragma unroll
            for (size_t i = 0; i < size; i++)
                c.bytes[size - 1 - i] = bytes[i];
        } else {
            #pragma unroll
            for (size_t i = 0; i < size; i++)
                c.bytes[i] = bytes[i];
        }
        return c.value;
    }

    byte *toBytes(const T &value, bool endianness = false) {
        c.value = value;
        if (endianness)
            reverse();
        return c.bytes;
    }

    void reverse() {
        #pragma unroll
        for (size_t i = 0; i < size / 2; i++) {
            byte tmp = c.bytes[i];
            c.bytes[i] = c.bytes[size - 1 - i];
            c.bytes[size - 1 - i] = tmp;
        }
    }

};

template <class T>
void printHex(const T &key, size_t size) {
    for (size_t i = 0; i < size; i++) {
        if (i > 0) printf(":");
        printf("%02X", (unsigned char)key[i]);
    }
    printf("\n");
}



Usage is very simple: let's say, for instance, you have the following 8 bytes in a string buffer called binary:

00:00:00:00:00:00:00:5E


converter<int64_t> c;
//binary value is 00:00:00:00:00:00:00:5E (8 bytes)
int64_t res =  c.fromBytes(reinterpret_cast<const unsigned char*>(binary.c_str()),true);
std::cout <<res<< " \n";



this will clearly output 94.

It works with almost all numerical types (all the integer flavors and the floating-point types).

A great experience, which I hope will become a joyful habit, took place last week when a colleague of mine at the Department of Engineering at the University of Edinburgh (Kino) told me about a meeting among the researchers/musicians of the university.

I was invited to play some music, and I obviously accepted! It was great to meet new smart people and musicians from all over the world and share with them delicious food and wine. There were several instruments involved in the performances, such as viola, violin, cello and voice.

Here two (very) short videos from my performance:

The following was played to fulfill a special request from Alice (a PhD colleague) from Milan.

Largest Prime Number

Recently a team at the University of Central Missouri, headed by Curtis Cooper, announced, via a press release from the Mersenne organization, the discovery of a new largest known prime. The number has more than 22M digits: 22,338,618, to be precise. It is so large that, writing 4 digits per centimeter, one would be able to cover the entire distance between Edinburgh and Glasgow! The full number is available here: http://www.filedropper.com/largestprimenumber

The following is a Haskell micro-script for computing the number and writing it to a file (Haskell's lazy I/O allows writing big files to disk without any concern about memory).


import Data.Time
import System.Environment (getArgs)

main :: IO ()
main = do
  [path] <- getArgs
  let n = 2 ^ 74207281 - 1
  startTime <- getCurrentTime
  writeFile path (show n)
  stopTime <- getCurrentTime
  putStrLn ("ElapsedTime: " ++ show (diffUTCTime stopTime startTime))



Compile using: ghc --make

and execute passing the output filename as the only parameter.

The full number file is ~22 MB; not surprising, as each digit character occupies one byte.

Here are the first digits of the number:

300376418084606182052986098359166050056875863030301484843941693345547723219067 994296893655300772688320448214882399426727835290700904836432218015348199652241 372287684310213386284573666361506667532122772859359864057780256875647795865832 142051171109635844262936572650387240710147982631320437143129112198392188761288 503958771920355017186438665809954286344460536606761717933683749624756782578361 731044883934155387085250868537297205931251606849781532670414744928294883449429 443999003776831072496868250622866039978884541062234219154504645252386846303469 724807334155852889497374778705327594144808269546049745682886662634337786061551 354498294392788969717277814170247857840825173814169979529718831378258156460855 598404801012277963664118162318740241984446339571147500893873350471752282309276 960908368218257475857949333688648781647084935600389442816615101269892941620923 700583920438303155576675128697727353015966198570119971508975499769430113632520 704976596018662818527213338297501690033894692212329648575780270141964029454297 379598752963111110166054910922708870780155972725875622704085120422206985800208 953699779570148521239387340972873010415557408840313517334104245951181312377569 862268931591236073913864912702341514442871893227806578339072908082737776944438 541558625494782239705021522924186805591226430219483495972094802701924328600534 393128646703341368026587734561209964921713257134223641483136379023890310042525 635413014854847842999675719601547926712259803033804208054192341842074795499467 736417866657681142429045674308204219551025449960330608429729874249539051023991 353492744406378092116867003111452756638147874006136238963152211561563090034814 454337404268972669143336589608026262105540337915734652847488347593274189154190 268344381703937005859988258738844104703265786972872467031538046586054465054455 ....

Programming Question: Given an integer, compute its parity (easy)
The parity of an integer is true iff the number of set bits (1s) is odd, false otherwise. Example: a number with 5 set bits, such as 31 = 0b11111, has parity true.

Solutions:
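One possible solution (among many): repeatedly clear the lowest set bit with `x & (x - 1)` and flip a flag, which runs in O(k) where k is the number of set bits:

```cpp
#include <cstdint>

// Returns true iff the number of set bits in x is odd.
// Each iteration clears the lowest set bit (x & (x - 1)),
// so the loop runs once per set bit.
bool parity(uint64_t x) {
    bool p = false;
    while (x) {
        p = !p;
        x &= x - 1;  // clear the lowest set bit
    }
    return p;
}
```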

Programming Question: Compute the Greatest Common Divisor of two integers (easy)

This question is divided in two parts: you will first be asked to write a function that computes the GCD without any space/time constraints. The second part is more challenging, asking you not to use multiplication, division or addition.

Part 1: write a function that takes two integers as input and returns their GCD using the famous Euclidean algorithm.

Part 2: compute the Greatest Common Divisor of two integers without using the multiplication, addition or division operators (easy).
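A sketch of both parts: part 1 is the classic Euclidean algorithm; for part 2, one possibility is the binary (Stein's) GCD, which needs only comparisons, subtraction and shifts:

```cpp
#include <cstdint>

// Part 1: classic Euclidean algorithm.
uint64_t gcd(uint64_t a, uint64_t b) {
    while (b != 0) {
        uint64_t t = a % b;
        a = b;
        b = t;
    }
    return a;
}

// Part 2: binary GCD — no multiplication, division or addition.
// Even factors are handled with shifts; two odd values are reduced
// by subtraction (their difference is even, so it can be halved).
uint64_t gcd_binary(uint64_t a, uint64_t b) {
    if (a == b) return a;
    if (a == 0) return b;
    if (b == 0) return a;
    if ((a & 1) == 0) {                        // a is even
        if ((b & 1) == 0)                      // both even: factor out 2
            return gcd_binary(a >> 1, b >> 1) << 1;
        return gcd_binary(a >> 1, b);          // only a even
    }
    if ((b & 1) == 0) return gcd_binary(a, b >> 1);  // only b even
    // both odd: subtract the smaller from the larger and halve
    if (a > b) return gcd_binary((a - b) >> 1, b);
    return gcd_binary((b - a) >> 1, a);
}
```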

Programming Question: given a 2D array, print it in spiral fashion (easy)

Given a 2D array print it in spiral fashion as shown in the picture.
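A possible sketch, peeling the matrix one ring at a time with four boundary indices (collecting the values into a vector rather than printing, to keep it testable):

```cpp
#include <vector>

// Traverses a 2D matrix in spiral order: top row left→right, right
// column top→bottom, bottom row right→left, left column bottom→top,
// then shrinks the boundaries and repeats on the inner ring.
std::vector<int> spiral_order(const std::vector<std::vector<int>> &m) {
    std::vector<int> out;
    if (m.empty()) return out;
    int top = 0, bottom = (int)m.size() - 1;
    int left = 0, right = (int)m[0].size() - 1;
    while (top <= bottom && left <= right) {
        for (int j = left; j <= right; j++) out.push_back(m[top][j]);
        for (int i = top + 1; i <= bottom; i++) out.push_back(m[i][right]);
        if (top < bottom)   // avoid re-reading a single remaining row
            for (int j = right - 1; j >= left; j--) out.push_back(m[bottom][j]);
        if (left < right)   // avoid re-reading a single remaining column
            for (int i = bottom - 1; i > top; i--) out.push_back(m[i][left]);
        top++; bottom--; left++; right--;
    }
    return out;
}
```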