Archive for the ‘Programming Interview’ Category

The most asked coding interview - #5

Transform array such that each element contains the greatest element on its right side

Problem statement

In this article we will be discussing how to solve an easy problem that has been asked at Amazon coding interviews. This is a must-do problem for your coding interview preparation. The problem statement goes as follows:

You are given an array A of size N. You have to modify A in place s.t. A[i] = M, where M is the maximum value among all elements A[j], j > i. If no such element exists, set A[i] = -1.

Example

Input :  A = {15, 22, 12, 13, 12, 19, 0, 2}
Output : A = {22, 19, 19, 19, 19, 2, 2, -1}
Input :  A = {2, 3, 1, 9}
Output : A = {9, 9, 9, -1}

Solution

Since this is quite an easy problem, we should focus on making a good impression on the interviewer by presenting clean reasoning and an elegant implementation of the solution.

Let's start by outlining our solution:

  1. The rightmost element of A will always be -1, since there is no element to the right of the rightmost location.
  2. When processing the element at index i, all we need to do is replace it with the maximum element found so far (i.e. the maximum among the elements to its right).
  3. At each iteration we need to keep track of the maximum element seen so far.

C++ Code

The ideas above can be coded as follows:

void solve(std::vector<int>& V)
{
    if(V.empty())
        return;
    //max so far
    int M = -1;
    int i = V.size()-1;
    do{
        const int m = std::max(M, V[i]);
        V[i] = M;
        M = m;
        i--;
    }while(i >= 0);
}

An alternative, cleaner implementation is given below:

void solve(std::vector<int>& V)
{
    if(V.size() > 0)
    {
        int M = V.back();   //max so far (the original last element)
        V.back() = -1;
        for(int i = static_cast<int>(V.size())-2; i >= 0; i--){
            const int m = std::max(M, V[i]);
            V[i] = M;
            M = m;
        }
    }
}

The time complexity of the code above is linear in the length of A, and only constant extra space is used since the array is modified in place.
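
As a quick sanity check, here is a minimal (hypothetical) driver exercising solve() on the first example:

#include <iostream>
#include <vector>

//assumes one of the solve() implementations above is visible here
int main()
{
    std::vector<int> A = {15, 22, 12, 13, 12, 19, 0, 2};
    solve(A);
    for (const int x : A)
        std::cout << x << ' ';   //expected: 22 19 19 19 19 2 2 -1
    std::cout << '\n';
}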


If you liked this article and you want to read more about coding interview preparation, take a look at the previous article of the series.

The most asked coding interview - #4

Return the Maximum absolute difference of values and indices

In this article we will be discussing one of the most frequently asked coding interview questions at Amazon.
The problem statement goes as follows:

Problem statement

Given an array A, determine the maximum possible value of the following expression:
$f(i,j) = |A[i] - A[j]| + |i - j|$

Example

Input : A = {1, 3, -1} 
Output : 5
because the maximum value obtainable is:
=> f(1,2) = |3 - (-1)| + |1-2| = 5
Input : A={3, -2, 5, -4}
Output : 10
=> f(0, 3) = f(3, 0) = |3 - (-4)| + |0 - 3| = 10
Note that in this case we can also obtain 10 from
=> f(2, 3) = f(3, 2) = |5 - (-4)| + |2 - 3| = 10
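
To make the expression concrete (the efficient solution is discussed in the full article), here is a minimal brute-force sketch using 0-based indices as in the examples above; the helper name is purely illustrative:

#include <algorithm>
#include <cstdlib>
#include <vector>

//O(N^2) baseline: evaluate f(i,j) = |A[i]-A[j]| + |i-j| over all pairs
int max_f_bruteforce(const std::vector<int>& A)
{
    int best = 0;
    for (int i = 0; i < (int)A.size(); i++)
        for (int j = 0; j < (int)A.size(); j++)
            best = std::max(best, std::abs(A[i] - A[j]) + std::abs(i - j));
    return best;
}

On the two examples above it returns 5 and 10, respectively.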
Read On…

The most asked programming interview questions - #3

Sort an array of size N made of numbers from 0 to K

Problem statement

In this article we will be discussing another very popular programming interview question (see the previous article). This time we are asked to sort an array of size N whose elements are in the range [0,K).

Given an array $A=[a_1,a_2,\ldots,a_n],\; 0 \leq a_i < K$,
modify A s.t. it is sorted.

As usual you should ask your interviewer some clarifying questions. For example a sensible question to ask could be:

  • What is the maximum value of $K$ and of $N$? Let's assume that $N \leq 10^7$ and $K\leq 10^3$

Example

Input : A={1,8,9,66,2,1,45,12}
Output : A={1,1,2,8,9,12,45,66}

Solution

Start by noticing that the question seems rather simple at first: after all, all you are asked to do is to sort an array. This is an easy task that can be accomplished with a one-liner in C++.

Simple sort solution

void sort_range_K(vector<int>& A)
{
  sort(begin(A), end(A));
}

But at this point you should also notice that you are not using the information about the range of the elements, and this should raise your eyebrows and make you think a bit harder: there must be a better way of solving this problem.
The complexity of the code above is $O(n \log_2 n)$.

Linear time Solution

You might not be aware of this (and if you are not, I highly encourage you to get familiar with the topic), but it is possible to achieve better asymptotic complexity for sorting than $O(n \log_2 n)$.
If you know the range of the elements you are dealing with, sorting can be done in linear time using Counting sort.

The idea behind it is really simple. For each number in the range [0,K) you count, in an array F of size K, how many times it appears in the original array.
For the example above, for instance:

A={1,8,9,66,2,1,45,12}
F ={0,2,1,0,0,0,0,0,1,1,....}
  • F[0] = 0 because 0 appears 0 times in A
  • F[1] = 2 because 1 appears 2 times in A
  • $…$
  • F[9] = 1 because 9 appears 1 time in A

With that information we can easily produce the output array by putting all the zeros first, then the ones, and so on.

C++ code

The idea above can be coded as follows:

template<class T>
void sort_range_K(vector<T>& A, const int K)
{
    //F[i] = number of occurrences of the value i in A
    vector<int> F(K,0);
    for(const auto& x : A)
        F[x]++;

    //rewrite A: F[0] zeros first, then F[1] ones, and so on
    int p = 0;
    for(int i = 0 ; i < K ; i++)
        for(int j = 0 ; j < F[i] ; j++)
            A[p++] = i;
}

The time complexity of the code above is $\Theta(N + K)$, i.e. linear in the length of A.
The space complexity is $\Theta(K)$.
You can try the above code online on Wandbox.
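
As a quick usage sketch (hypothetical driver; K only needs to be strictly greater than the largest element, so 67 works here):

#include <iostream>
#include <vector>
using std::vector;

//assumes the sort_range_K template above is visible here
int main()
{
    vector<int> A = {1, 8, 9, 66, 2, 1, 45, 12};
    sort_range_K(A, 67);   //elements are in the range [0, 67)
    for (const int x : A)
        std::cout << x << ' ';   //prints: 1 1 2 8 9 12 45 66
}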


Conclusions

In this article we discussed a common coding interview question asked by many tech companies. The takeaways from this exercise are:

  • If a piece of information is provided in the problem statement, it is probably meant to be used
  • If you know the range of the elements you want to sort, you can sort them in linear time. The space complexity is linear in the largest possible value an element of the array can take.

If you liked this article, check the previous article of the series (determine if a number is an Armstrong number).

The most asked coding interview - #2

Coding interview question: Determine if number is an Armstrong number

Problem statement

In this article we will be discussing another of the most common coding interview questions (see the previous article, counting triples with a given sum). The problem statement is really simple and goes as follows:

Given a positive integer N, determine whether it is an Armstrong number.
A number $x_1x_2\ldots x_n$ of length $n$ (where each $x_i$, $0 \leq x_i \leq 9$, is a digit) is an Armstrong number if the following holds:
$x_1x_2\ldots x_n = x_1^n + x_2^n + \ldots + x_n^n$

In other words, if raising each digit to the power $n$ and summing them all up yields the original number, then $N$ is an Armstrong number.
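
A minimal sketch of this check (my own illustrative helper, not necessarily the implementation discussed in the full article) could be:

#include <string>

//true if N equals the sum of its digits, each raised to the
//power of the number of digits of N
bool is_armstrong(const unsigned long long N)
{
    const std::string digits = std::to_string(N);
    const int n = static_cast<int>(digits.size());
    unsigned long long sum = 0;
    for (const char c : digits) {
        unsigned long long p = 1;
        for (int k = 0; k < n; k++)
            p *= (c - '0');   //integer pow(digit, n)
        sum += p;
    }
    return sum == N;
}

For instance, is_armstrong(153) is true because $1^3 + 5^3 + 3^3 = 153$.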

Read On…

The 10 most asked coding interview questions at Google - #1

Coding interview at google

Problem statement

In this article we will be discussing one of the most common coding interview questions asked in many Google interviews, especially during one of the first stages. The problem statement goes as follows:

Given an array N of distinct integers and an integer K, print on the standard output the number of triples (N[i], N[j], N[l]) whose sum is equal to K. In other words, how many triples (i, j, l), i < j < l, s.t. N[i] + N[j] + N[l] = K are there in N?
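
As a baseline (the efficient approaches are covered in the full article), a brute-force sketch that makes the $i < j < l$ counting explicit could be:

#include <cstdint>
#include <vector>

//O(N^3) baseline: count triples i < j < l with N[i] + N[j] + N[l] == K
std::int64_t count_triples_bruteforce(const std::vector<int>& N, const int K)
{
    std::int64_t count = 0;
    const int n = N.size();
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            for (int l = j + 1; l < n; l++)
                if (N[i] + N[j] + N[l] == K)
                    count++;
    return count;
}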

Read On…

Modern C++ concurrency - parallel quick-sort with std::future

In this short lesson we will discuss how to parallelize a simple and rather inefficient (because this is not an in-place version) implementation of quick-sort using asynchronous tasks and futures.

We will perform some benchmarking and performance analysis and we will try to understand how we can further improve our implementation.

Quick sort

In this section, I will briefly refresh your memory on quick-sort by showing you a simple and self-explanatory Haskell version first. We will then write a serial C++ implementation of the same algorithm that we will use as a basis for our parallelization.

Here is the Haskell version:

quicksort [] = []
quicksort (p:xs) = (quicksort lesser) ++ [p] ++ (quicksort greater)
    where
        lesser = filter (< p) xs
        greater = filter (>= p) xs

It is beautifully simple: in order to sort a non-empty list with head p and tail xs, it is only necessary to partition xs into two sublists:

  • lesser: containing all the elements in xs smaller than p
  • greater: containing all the elements in xs greater than or equal to p

Once both sublists are sorted, we can finally return the whole sorted list by simply gluing lesser, p and greater together, in this order.

If you still have trouble understanding the quick-sort algorithm please refer to Wikipedia.

Quick-sort serial version

The following is the serial C++ implementation of the same idea described above. It should be pretty easy to map the following implementation to the Haskell one. Run it on Wandbox

template <typename T>
void quick_sort_serial(vector<T>& v) {
  if (v.size() <= 1) return;
  auto start_it = v.begin();
  auto end_it = v.end();

  const T pivot = *start_it;

//partition the list
  vector<T> lesser;
  copy_if(start_it, end_it, std::back_inserter(lesser),
          [&](const T& el) { return el < pivot; });

  vector<T> greater;
  copy_if(start_it + 1, end_it, std::back_inserter(greater),
          [&](const T& el) { return el >= pivot; });

//solve subproblems
  quick_sort_serial(lesser);
  quick_sort_serial(greater);

//merge
  std::copy(lesser.begin(), lesser.end(), v.begin());
  v[lesser.size()] = pivot;
  std::copy(greater.begin(), greater.end(),
            v.begin() + lesser.size() + 1);
}

Parallelizing Quick-sort using std::future

In order to speed things up, we are going to use the fact that quick-sort is a divide and conquer algorithm. Each subproblem can be solved independently:
creating and sorting lesser and greater are two independent tasks. We can easily perform both on different threads.

The following is the first parallel version of the quick_sort_serial() above.
Run it on Wandbox

template <typename T>
void filter_less_than(const vector<T>& v, vector<T>& lesser, const T& pivot) {
  for (const auto el : v) {
    if (el < pivot) lesser.push_back(el);
  }
}

template <typename T>
void quick_sort_parallel1(vector<T>& v) {
  if (v.size() <= 1) return;
  auto start_it = v.begin();
  auto end_it = v.end();

  const T pivot = *start_it;
  vector<T> lesser;
  auto fut1 = std::async(
        std::launch::async, 
        [&]() {
            filter_less_than<T>(std::ref(v), std::ref(lesser), pivot);
            quick_sort_parallel1<T>(std::ref(lesser));
  });

  vector<T> greater;
  copy_if(start_it + 1, end_it, std::back_inserter(greater),
          [&](const T& el) { return el >= pivot; });

  quick_sort_parallel1(greater);

  fut1.wait();

  std::copy(lesser.begin(), lesser.end(), v.begin());
  v[lesser.size()] = pivot;
  std::copy(greater.begin(), greater.end(),
            v.begin() + lesser.size() + 1);
}

As you can notice, the creation and sorting of lesser and greater are performed in parallel. Each thread running an instance of quick_sort_parallel1() will create another thread running quick-sort on one of the two sub-problems, while the other subproblem is solved by the current thread.

This is exactly what we are doing when we spawn the following async task:
we are creating a task that will populate lesser with all the elements from v less than pivot and, once ready, it will sort it.
Please note that everything we need to modify by reference needs to be wrapped in std::ref, as we discussed in the previous lessons.

The following picture shows how the execution unfolds for the unsorted list: [2,7,1,6,9,5,8,3,4,10]:

The following code shows how to spawn an async thread solving the lesser subproblem:

  vector<T> lesser;
  auto fut1 = std::async([&]() {
    filter_less_than<T>(std::ref(v), std::ref(lesser), pivot);
    quick_sort_parallel1<T>(std::ref(lesser));
  });

While this task is running on the newly created thread, we can solve greater on the current thread.

The asynchronous task will recursively spawn other async tasks until a list of size <=1 is created, which is of course already sorted. There is nothing to do in this case.

Once the main thread is done with sorting the greater list, it waits for the asynchronous task to be ready using the std::future::wait() function.
Once wait returns, both lists are sorted and we can proceed with merging them; finally, we have a sorted list.

Performance analysis

Let's quickly analyze our implementation. We will compare execution time for the single-thread and async-parallel versions above.

Let's start our analysis by taking a look at this graph depicting the execution time (average of 10 runs) for both versions above:

It might be surprising to see that the async parallel version is way slower than the single-threaded version: ~55x slower!
Why is that? The reason is that the parallel version creates a new thread for every single subproblem, even for the ones that are quite small.
Threads are costly for the OS to manage: they use resources and need to be scheduled. For smaller tasks, the overhead caused by the additional thread is larger than the performance gain we might get by processing the sublist in parallel. This is exactly what is happening here.

In order to solve this issue, we want to modify the async code above so that a new thread is spawned only when the input list v is larger than a certain threshold. The code below implements the aforementioned idea:

template <typename T>
void quick_sort_async_lim(vector<T>& v) {
  if (v.size() <= 1) return;
  auto start_it = v.begin();
  auto end_it = v.end();

  const T pivot = *start_it;
  vector<T> lesser;

  vector<T> greater;
  copy_if(start_it + 1, end_it, std::back_inserter(greater),
          [&](const T& el) { return el >= pivot; });

  if (v.size() >= THRESHOLD) {  //THRESHOLD: tunable size limit, see below
    auto fut1 = std::async([&]() {
      filter_less_than<T>(std::ref(v), std::ref(lesser), pivot);
      quick_sort_async_lim<T>(std::ref(lesser));
    });

    quick_sort_async_lim(greater);
    fut1.wait();

  } else {
    //problem is too small.
    //Do not create new threads
    copy_if(start_it, end_it, std::back_inserter(lesser),
            [&](const T& el) { return el < pivot; });
    quick_sort_async_lim(lesser);
    quick_sort_async_lim(greater);
  }

  std::copy(lesser.begin(), lesser.end(), v.begin());
  v[lesser.size()] = pivot;
  std::copy(greater.begin(), greater.end(),
            v.begin() + lesser.size() + 1);
}

As you can notice, the only addition in this optimized version is that a new thread is spawned only when the size of the input list is larger than THRESHOLD. If the list is too small, we fall back on the classic single-thread version.
The following pictures show the results for the optimized version above with THRESHOLD=4000. As you can notice, the execution time drops considerably w.r.t. the single-threaded version. We have achieved a ~4x speedup with minimal programming effort.

We have introduced a new parameter in our code, and we need to figure out what is the best value of THRESHOLD. In order to do so, let's analyze the performance of the code above for various values of the threshold.
The following graph depicts the execution time for various values of THRESHOLD. Note that the y-axis is in log scale. The execution time drops quite abruptly as THRESHOLD grows from 0 to 300.
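
A measurement of this kind can be obtained with a simple harness like the following sketch (random input, std::chrono timing; it assumes quick_sort_async_lim and the THRESHOLD constant are defined as above, and the function name elapsed_ms is illustrative):

#include <algorithm>
#include <chrono>
#include <iostream>
#include <random>
#include <vector>

//time one run of quick_sort_async_lim on a random vector of the given size
double elapsed_ms(const std::size_t size)
{
    std::mt19937 gen(42);
    std::uniform_int_distribution<int> dist(0, 1000000);
    std::vector<int> v(size);
    std::generate(v.begin(), v.end(), [&] { return dist(gen); });

    const auto start = std::chrono::steady_clock::now();
    quick_sort_async_lim(v);
    const auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(stop - start).count();
}

int main()
{
    std::cout << "elapsed: " << elapsed_ms(1000000) << " ms\n";
}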

Conclusion

We have used std::future to parallelize the quick-sort algorithm. The async code differs only slightly from the serial single-thread implementation, but runs ~4x faster. On the other hand, we have learned that running too many threads is definitely not a good idea, because each thread comes with an overhead: the OS needs to allocate resources and time to manage them.

Modern C++ concurrency - Returning values from Threads - std::future

Introduction

In this lesson we will talk about a way of returning values from threads; more precisely, we will talk about std::future, a mechanism that C++ offers in order to perform asynchronous tasks and query for the result in the future.
A future represents an asynchronous task, i.e. an operation that runs in parallel to the current thread and on which the latter can wait (if it needs to) until the result is ready.
You can use a future whenever you need a thread to wait for a one-off event to happen. The thread can check the status of the asynchronous operation by periodically polling the future while still performing other tasks, or it can just wait for the future to become ready.
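
As a tiny preview of the pattern discussed in the lesson, a minimal sketch using std::async could look like this:

#include <future>
#include <iostream>

int main()
{
    //launch an asynchronous task; the future will eventually hold its result
    std::future<int> answer = std::async(std::launch::async, [] {
        return 6 * 7;   //stand-in for some expensive computation
    });

    //...the current thread is free to do other work here...

    //block (only if necessary) until the result is ready
    std::cout << answer.get() << '\n';   //prints 42
}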

Read On…

Modern C++ Concurrency - Synchronizing threads - Condition Variables

Introduction

In the previous lesson we have seen how data can be protected using mutexes. We now know how to make threads do their work concurrently without messing around with shared resources and data. But sometimes we need to synchronize their actions in time, meaning that we might want a thread t1 to wait until a certain condition is true before allowing it to continue its execution.

This lesson discusses the tools that we can use to achieve such behavior efficiently using condition variables.
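
As a small preview of the mechanism covered in the lesson, a minimal sketch in which a thread waits on a condition variable until another thread sets a flag could look like this:

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool ready = false;

int main()
{
    std::thread t1([] {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return ready; });   //sleep until the condition is true
        std::cout << "t1 resumed\n";
    });

    {
        std::lock_guard<std::mutex> lock(m);
        ready = true;                          //make the condition true...
    }
    cv.notify_one();                           //...and wake t1 up

    t1.join();
}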

Read On…

Solution to the Codility Common Prime divisors Set Problem

This article discusses a problem that I recently solved on Codility.

The core of the problem is the following:
Given two non-negative integers N and M, $1 \leq M \leq N \leq 2147483647$, the task is to check whether they have the same set of prime divisors.
A prime divisor of an integer P is a prime d s.t. $d \times k = P$ for some positive integer k. You are given up to $6 \times 10^3$ such queries, and should return the total number of them that evaluate to true.

For instance, if N = 156 and M = 78 then our function should return *true*, because the set of prime divisors of N is equal to the set of prime divisors of M, i.e. $\{2, 3, 13\}$, while for N = 45 and M = 120 the function should return *false*.
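
My full solution is in the linked article; purely as an illustration of the definition, a straightforward (and not the fastest) sketch comparing the two sets of prime divisors via trial division could be:

#include <set>

//set of prime divisors of x, computed by trial division
std::set<long long> prime_divisors(long long x)
{
    std::set<long long> primes;
    for (long long d = 2; d * d <= x; d++)
        while (x % d == 0) {
            primes.insert(d);
            x /= d;
        }
    if (x > 1)
        primes.insert(x);   //whatever is left is prime
    return primes;
}

bool same_prime_divisors(const long long N, const long long M)
{
    return prime_divisors(N) == prime_divisors(M);
}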

Read On…

Tree Vertex Cover Problem

Weighted Tree Vertex Cover Problem

A vertex cover of a graph $G=(V,E)$ is a subset $V' \subseteq V$ s.t. every edge $(u,v) \in E$ is incident to at least one vertex in $V'$. In other words, it is a subset of the vertices such that every edge of the graph has at least one endpoint in the vertex cover.
We will derive an algorithm for finding the weight of a minimal (yes, it is not unique) vertex cover for a subclass of graphs, i.e. trees (connected acyclic graphs with the property that exactly one path exists between each pair of vertices).

Remember that vertex cover for general graphs is an NP-complete problem (NP-hard, i.e. at least as hard as every problem in NP, and itself in NP), i.e. no deterministic polynomial-time algorithm is known (if you discover one, contact me, we will be millionaires).

Tree Vertex Cover - Problem Definition

Given a weighted tree $T=(V,E)$ with a weight $w(v)$ associated to each vertex $v \in V$, write an algorithm for computing a vertex cover with minimum weight, i.e. a subset $V' \subseteq V$ that is a vertex cover and such that the sum of the weights of its elements is minimal.

The following is the tree structure that we will use throughout the article.

template<typename T, int DEGREE>
struct node{
    std::array<node*, DEGREE> children{};  //child pointers (unused slots are nullptr)

    T data;
    int weight;
    int mincover;   //weight of a minimum vertex cover of the subtree rooted here

    node(const T& v, int _weight) : data(v), weight(_weight), mincover(0) {}
};

What if the weight is equal to the degree of the node?

The first observation we can make is that the root node can weather be or not be in the vertex cover. If we include it in the solution then we are sure that all the edges from it to its children have been covered and to solve the problem we need only to compute the cover of its children (which is a simpler problem). Read On…