Transform array such that each element contains the greatest element on its right side

Problem statement

In this article we will be discussing how to solve an easy problem that has been asked at Amazon coding interviews. This is a must-do problem for your coding interview preparation. The problem statement goes as follows:

You are given an array A of size N. You have to modify A in place s.t. A[i] = M, where M is the maximum value among all elements A[j], j > i. If no such element exists, set A[i] = -1.

Input : A = {2, 3, 1, 9}
Output : A = {9, 9, 9, -1}

Solution

Since this is quite an easy problem, we need to focus on making a good impression on the interviewer by showing clean reasoning and an elegant implementation of the solution.

Let's start by outlining our solution:

The rightmost element of A will always be -1, since there is no element to the right of the rightmost location.

When processing the element at index i, all we need to do is substitute it with the maximum element found so far among the elements to its right.

We need to make sure that at each iteration we keep track of the maximum element seen so far.

C++ Code

The ideas above can be coded as follows:

void solve(std::vector<int>& V)
{
    if (V.empty())
        return;
    int M = -1; // max seen so far
    int i = static_cast<int>(V.size()) - 1;
    do {
        const int m = std::max(M, V[i]);
        V[i] = M;
        M = m;
        i--;
    } while (i >= 0);
}

An alternative, cleaner implementation is given below. Note that we must remember the original last element before overwriting it with -1, otherwise its value would be lost:

void solve(std::vector<int>& V)
{
    if (!V.empty())
    {
        int M = V.back(); // max seen so far (the original last element)
        V.back() = -1;
        for (int i = static_cast<int>(V.size()) - 2; i >= 0; i--) {
            const int m = std::max(M, V[i]);
            V[i] = M;
            M = m;
        }
    }
}

The time complexity of the code above is linear in the length of A, and only constant extra space is used.

If you liked this article and you want to read more about coding interview preparation, take a look at the previous article of the series.

Sort an array of size N made of numbers from 0 to K

Problem statement

In this article we will be discussing another very popular programming interview question (see the previous article). This time we are asked to sort an array of size N whose elements are in the range [0,K).


Given an array $A=[a_1,a_2,\dots,a_n],\: 0 \leq a_i < K$,
modify A s.t. it is sorted.

As usual you should ask your interviewer some clarifying questions. For example a sensible question to ask could be:

What is the maximum value of $K$ and of $N$?
Let's assume that $N \leq 10^7$ and $K\leq 10^3$

Notice first that the question seems rather simple: after all, all you are asked to do is to sort an array, an easy task that can be accomplished with a one-liner in C++.

Simple sort solution

void sort_range_K(vector<int>& A)
{
    std::sort(std::begin(A), std::end(A));
}

But at this point you should also notice that you are not using the information about the range of the elements, and this should raise your eyebrows and make you think a bit harder: it usually means there is a better way of solving the problem.
The complexity of the code above is $O(n \log_2 n)$.

Linear time Solution

You might not be aware (and if not, I highly encourage you to get familiar with the topic) that it is possible to achieve better asymptotic complexity for sorting than $O(n \log_2 n)$.
If you know the range of the elements you are dealing with, sorting can be done in linear time using Counting sort.

The idea behind it is really simple. For each number in the range [0,K) you count, in an auxiliary array F of size K, how many times it appears in the original array.
For instance, given:

A={1,8,9,66,2,1,45,12}
F ={0,2,1,0,0,0,0,0,1,1,....}

F[0] = 0 because 0 appears 0 times in A

F[1] = 2 because 1 appears 2 times in A

$…$

F[9] = 1 because 9 appears 1 time in A

With that information we can easily produce the output array by putting all the zeros first, then the ones, and so on.

C++ code

The idea above can be coded as follows:

template <class T>
void sort_range_K(vector<T>& A, const int K)
{
    vector<int> F(K, 0); // F[i] = number of occurrences of i in A
    for (const auto& x : A)
        F[x]++;
    int p = 0;
    for (int i = 0; i < K; i++)
        for (int j = 0; j < F[i]; j++)
            A[p++] = i;
}

The time complexity of the code above is $\Theta(N + K)$, linear in the length of A.
The space complexity is $\Theta(K)$.
You can try the above code online on Wandbox.

Conclusions

In this article we discussed a common coding interview question asked by many tech companies. The takeaways from this exercise are:

If a piece of information is provided in the problem statement, it is probably meant to be used

If you know the range of the elements you want to sort, you can sort them in linear time. The space complexity is linear in the biggest possible value an element of the array can take.

If you like this article check the previous article of the series (determine if a number is an Armstrong number).

Coding interview question: Determine if number is an Armstrong number

Problem statement

In this article we will be discussing another of the most common coding interview questions (see the previous article, counting triples with a given sum). The problem statement is really simple and goes as follows:

Given a positive integer N, determine whether it is an Armstrong number. A number with digits $x_1 x_2 \dots x_n$ ($0 \leq x_i \leq 9$) is an Armstrong number if the following holds: $N = x_1^n + x_2^n + \dots + x_n^n$

In other words, if raising each digit to the power $n$ and summing them all up yields the original number, then $N$ is an Armstrong number.

In this article we will be discussing one of the most common coding interview questions asked in many Google interviews, especially during one of the first stages. The problem statement goes as follows:

Given an array N of distinct integers and an integer K, print to the standard output the number of triples (N[i], N[j], N[l]) whose sum is equal to K. In other words, how many triples (i, j, l), i < j < l, s.t. N[i] + N[j] + N[l] = K are in N?

In this article I will present what are, in my opinion, the most useful and worth-learning BASH shortcuts. Learning this bash cheatsheet can make you substantially more productive and help you work faster in the terminal. This is not intended to be a full list of all possible BASH shortcuts; if you need one, you had better check the bash documentation.

You can also download a printer-friendly BASH cheat sheet PDF that you can print and keep on your desk. You will find a link to it at the end of the article.

Command Editing Shortcuts

Shortcut      Description
Ctrl + a      go to the start of the command line
Ctrl + e      go to the end of the command line
Ctrl + k      delete from cursor to the end of the command line
Ctrl + u      delete from cursor to the start of the command line
Ctrl + w      delete from cursor to start of word (i.e. delete backwards one word)
Ctrl + y      paste word or text that was cut using one of the deletion shortcuts (such as the one above) after the cursor
Ctrl + xx     move between start of command line and current cursor position (and back again)
Alt + b       move backward one word (or go to start of word the cursor is currently on)
Alt + f       move forward one word (or go to end of word the cursor is currently on)
Alt + d       delete to end of word starting at cursor (whole word if cursor is at the beginning of word)
Alt + c       capitalize to end of word starting at cursor (whole word if cursor is at the beginning of word)
Alt + u       make uppercase from cursor to end of word
Alt + l       make lowercase from cursor to end of word
Alt + t       swap current word with previous
Ctrl + f      move forward one character
Ctrl + b      move backward one character
Ctrl + d      delete character under the cursor
Ctrl + h      delete character before the cursor
Ctrl + t      swap character under cursor with the previous one

Command Control Shortcuts

Shortcut      Description
Ctrl + l      clear the screen
Ctrl + s      stop the output to the screen (for a long-running verbose command)
Ctrl + q      allow output to the screen (if previously stopped using the shortcut above)
Ctrl + c      terminate the command
Ctrl + z      suspend/stop the command

Command Recall Shortcuts

Shortcut      Description
Ctrl + r      search the history backwards
Ctrl + g      escape from history-searching mode
Ctrl + p      previous command in history (i.e. walk back through the command history)
Ctrl + n      next command in history (i.e. walk forward through the command history)

In this article we are going to discuss an interview question that is asked quite frequently in companies like Facebook, Microsoft, or Google.

The problem statement goes as follows: Given a vector<T> A of elements of type T of length N, and a vector<size_t> P where P is a permutation of the numbers [0, N-1], you need to apply the permutation P to the vector A. You are not allowed to use more than O(1) additional space.

For example, suppose A = ['a','b','c','d'] and P = [2,0,3,1]. After applying P to A, the latter becomes ['c','a','d','b']. P[i] basically tells you which element of A should go in position i in the final arrangement of A.

Solving this problem can be very easy if you are allowed to use additional O(n) memory. All you need to do is to construct a new array, taking the elements as specified in P. The following code shows how this approach can be implemented in C++

template <typename T>
void apply_permutation(vector<T>& A, const vector<int>& P)
{
    vector<T> A1;
    A1.reserve(A.size());
    for (size_t i = 0; i < A.size(); i++)
        A1.push_back(std::move(A[P[i]]));
    A = std::move(A1);
}

Note how we use std::move, which avoids a copy (when possible) of potentially heavy objects. Once the additional array A1 has been constructed, we modify A by moving A1 into it. That is pretty easy stuff.

In order to solve the harder, in-place version we need to work a bit more.

The approach that leads to the correct solution goes as follows: for each element that is not yet in its correct location, move it to its correct location. Sounds easy, and in fact it is. Let's use the following concrete problem instance as a guide:

A = [a,b,c,d,e,f]

P = [3,1,0,2,5,4]

For i = 0 we notice that P[i] != i, meaning that A[3] should be moved into A[0]. In order to achieve this we swap(A[0], A[3]), which results in A = [d, b, c, a, e, f]. Now the first position of the answer is correct. Well done.

But we moved the element originally at position i=0, namely a, to position 3, which is clearly not its correct position (according to P, a should be placed at position 2, since P[2] = 0). What do we do? We swap a with the element that should really go in position 3, i.e. A[P[3]] = c. After swap(A[3], A[2]), A = [d,b,a,c,e,f]. a is now at location 2, and P[2] says that the element that should go there is precisely a. Good: we found the correct spot for the element originally at position i=0, and in the process we also placed other elements in their correct positions.

Ideally we would now process another element that is not yet in its correct location. But how do we know which elements are already in place? Since we cannot use any additional memory, we need to be clever and use P itself. What if we set P[i] = i whenever we place an element at position i? If we do that, then whenever P[i] = i there is no work to do, because the element is already in its final location.

If we apply this line of reasoning to all elements of A, the final result is exactly what we expect. Basically, for an element not yet in its correct location we keep swapping until it reaches its final location. If we do this for all elements, then by definition we have solved the problem.

I am sure the idea above will be much clearer once we look at the code: curr and next are, respectively, the indices of the elements we are considering for a swap.

template <typename T>
void apply_permutation_in_place(vector<T>& A, vector<int>& P)
{
    for (size_t i = 0; i < A.size(); i++) {
        size_t curr = i;
        size_t next = P[curr];
        while (next != i) {
            swap(A[curr], A[next]);
            P[curr] = curr; // mark this position as done
            curr = next;
            next = P[next];
        }
        P[curr] = curr;
    }
}

You can try the above code online on wandbox ( https://wandbox.org/permlink/NevnhZFq2G0cgL7O ).

Command-line argument management is tricky to get right, especially when the number of options we support and the number of their combinations is big. For this kind of application there are already a number of very effective libraries and tools. One such library is Boost::program_options, which I highly encourage you to use, as it is awesome.

But there are other cases, when we need to quickly write a prototype or when the number of options and possible configurations is small, where using Boost could be overkill. In such cases what we usually do is write ad-hoc command-line parsing code that will never be reused and is tailored to the specific application at hand. This approach costs time and gets boring quickly.

For this reason, I wrote a super simple and minimalistic C++17 command-line argument manager which allows you to:

Check if a set of options has been specified

Get the argument of each element of a set of options (as string).

The class is extremely easy to use. You only need to construct an instance of the class by passing the argc and argv variables, and then you are ready to go. No more need for ugly and error-prone command-line argument parsing. For instance, as you can see from the code snippet below, I check for the existence of the -h or --help options and of all the other required options. If one of -X, -O, -P is not present, or its required argument is not specified, a help message is printed. Otherwise, the arguments are retrieved and returned as a vector of strings ready to be used.

The example code is listed below or can be downloaded from here (gist on GitHub). You can also try it live on wandbox.

In this short lesson we will discuss how to parallelize a simple, and rather inefficient (because it is not an in-place version), implementation of quick-sort using asynchronous tasks and futures.

We will perform some benchmarking and performance analysis and we will try to understand how we can further improve our implementation.

Quick sort

In this section, I will briefly refresh your memory on quick-sort. I will do so by first showing you a simple and self-explanatory Haskell version. We will then write a serial C++ version of the same algorithm that we will use as a basis for our parallelization.

It is beautifully simple: in order to sort a list with at least one element, take its head p as the pivot; then it is only necessary to partition the rest of the elements xs into two sublists:

lesser: containing all the elements in xs smaller than p

greater: containing all the elements in xs greater than or equal to p

Once both sublists are sorted, we obtain the whole sorted list by simply gluing lesser, p and greater together, in this order.

If you still have trouble understanding the quick-sort algorithm please refer to Wikipedia.

Quick-sort serial version

The following is the serial C++ implementation of the same idea described above. It should be pretty easy to map the following implementation to the Haskell one. Run it on Wandbox

template <typename T>
void quick_sort_serial(vector<T>& v) {
  if (v.size() <= 1) return;
  auto start_it = v.begin();
  auto end_it = v.end();
  const T pivot = *start_it;
  // partition the list
  vector<T> lesser;
  copy_if(start_it + 1, end_it, std::back_inserter(lesser),
          [&](const T& el) { return el < pivot; });
  vector<T> greater;
  copy_if(start_it + 1, end_it, std::back_inserter(greater),
          [&](const T& el) { return el >= pivot; });
  // solve subproblems
  quick_sort_serial(lesser);
  quick_sort_serial(greater);
  // merge
  std::copy(lesser.begin(), lesser.end(), v.begin());
  v[lesser.size()] = pivot;
  std::copy(greater.begin(), greater.end(),
            v.begin() + lesser.size() + 1);
}

Parallelizing Quick-sort using std::future

In order to speed things up we are going to exploit the fact that quick-sort is a divide-and-conquer algorithm. Each subproblem can be solved independently: creating and sorting lesser and greater are two independent tasks, and we can easily perform them on different threads.

The following is the first parallel version of the quick_sort_serial() above. Run it on Wandbox

template <typename T>
void filter_less_than(const vector<T>& v, vector<T>& lesser, const T pivot) {
  for (const auto& el : v) {
    if (el < pivot) lesser.push_back(el);
  }
}

template <typename T>
void quick_sort_parallel1(vector<T>& v) {
  if (v.size() <= 1) return;
  auto start_it = v.begin();
  auto end_it = v.end();
  const T pivot = *start_it;
  vector<T> lesser;
  // create and sort lesser on a separate thread
  auto fut1 = std::async(
      std::launch::async,
      [&]() {
        filter_less_than<T>(std::ref(v), std::ref(lesser), pivot);
        quick_sort_parallel1<T>(std::ref(lesser));
      });
  // meanwhile, create and sort greater on the current thread
  vector<T> greater;
  copy_if(start_it + 1, end_it, std::back_inserter(greater),
          [&](const T& el) { return el >= pivot; });
  quick_sort_parallel1(greater);
  fut1.wait();
  // merge
  std::copy(lesser.begin(), lesser.end(), v.begin());
  v[lesser.size()] = pivot;
  std::copy(greater.begin(), greater.end(),
            v.begin() + lesser.size() + 1);
}

As you can notice, the creation and sorting of lesser and greater are performed in parallel. Each thread running an instance of quick_sort_parallel1() creates another thread running quick-sort on one of the two subproblems, while the other subproblem is solved by the current thread.

This is exactly what we do when we spawn the async task above: we create a task that populates lesser with all the elements of v less than pivot and, once that is done, sorts it. Please note that everything the task needs to modify by reference must be wrapped in std::ref, as we discussed in the previous lessons.

The following picture shows how the execution unfolds for the unsorted list: [2,7,1,6,9,5,8,3,4,10]:

The std::async call is what spawns the thread solving the lesser subproblem.

While this task is running on the newly created thread, we can solve greater on the current thread.

The asynchronous task will recursively spawn other async tasks until a list of size <=1 is created, which is of course already sorted. There is nothing to do in this case.

Once the main thread is done sorting the greater list, it waits for the asynchronous task to be ready using the std::future::wait() function. Once wait returns, both lists are sorted, and we can proceed with merging the results: finally, there it is, we have a sorted list.

Performance analysis

Let's quickly analyze our implementation. We will compare execution time for the single-thread and async-parallel versions above.

Let's start our analysis by looking at this graph depicting the execution time (average of 10 runs) for both versions above:

It might be surprising to see that the async parallel version is way slower than the single-threaded version: ~55x slower! Why is that? The reason is that the parallel version creates a new thread for every single subproblem, even the quite small ones. Threads are costly for the OS to manage: they use resources and need to be scheduled. For smaller tasks the overhead caused by the additional thread is larger than the performance gain we get by processing the sublist in parallel. This is exactly what is happening here.

In order to solve this issue, we want to modify the async code above so that a new thread is spawned only when the input list v is larger than a certain threshold. The code below implements the aforementioned idea:

// spawn a new thread only for inputs larger than this size
constexpr size_t THRESHOLD = 4000;

template <typename T>
void quick_sort_async_lim(vector<T>& v) {
  if (v.size() <= 1) return;
  auto start_it = v.begin();
  auto end_it = v.end();
  const T pivot = *start_it;
  vector<T> lesser;
  vector<T> greater;
  copy_if(start_it + 1, end_it, std::back_inserter(greater),
          [&](const T& el) { return el >= pivot; });
  if (v.size() >= THRESHOLD) {
    auto fut1 = std::async([&]() {
      filter_less_than<T>(std::ref(v), std::ref(lesser), pivot);
      quick_sort_async_lim<T>(std::ref(lesser));
    });
    quick_sort_async_lim(greater);
    fut1.wait();
  } else {
    // problem is too small: do not create new threads
    copy_if(start_it + 1, end_it, std::back_inserter(lesser),
            [&](const T& el) { return el < pivot; });
    quick_sort_async_lim(lesser);
    quick_sort_async_lim(greater);
  }
  std::copy(lesser.begin(), lesser.end(), v.begin());
  v[lesser.size()] = pivot;
  std::copy(greater.begin(), greater.end(),
            v.begin() + lesser.size() + 1);
}

As you can notice, the only addition in this optimized version is that a new thread is spawned only when the size of the input list is larger than THRESHOLD. If the list is too small, we fall back to the classic single-threaded version. The following pictures show the results for the optimized version above with THRESHOLD = 4000. As you can notice, the execution time drops significantly compared to the single-threaded version: we have achieved a ~4x speedup with minimal programming effort.

We have introduced a new parameter in our code, and we need to figure out the best value of THRESHOLD. To do so, let's analyze the performance of the code above for various values of the threshold. The following graph depicts the execution time for various values of THRESHOLD; note that the y-axis is in log scale. The execution time drops quite abruptly as THRESHOLD grows from 0 to 300.

Conclusion

We have used std::future to parallelize the quick-sort algorithm. The async code differs only slightly from the serial single-threaded implementation, but runs 4x faster. On the other hand, we have learned that running too many threads is definitely not a good idea, because each thread comes with an overhead: the OS needs to allocate resources and time to manage them.

In this lesson we will talk about a way of returning values from threads; more precisely, we will talk about std::future, a mechanism that C++ offers for running asynchronous tasks and querying for the result in the future. A future represents an asynchronous task, i.e. an operation that runs in parallel to the current thread and which the latter can wait for (if it needs to) until the result is ready. You can use a future whenever you need a thread to wait for a one-off event to happen. The thread can check the status of the asynchronous operation by periodically polling the future while still performing other tasks, or it can simply wait for the future to become ready.

In the previous lesson we saw how data can be protected using a mutex. We now know how to make threads do their work concurrently without messing around with shared resources and data. But sometimes we need to synchronize their actions in time, meaning that we might want a thread t1 to wait until a certain condition is true before allowing it to continue its execution.

This lesson discusses the tools that we can use to achieve such behavior efficiently using condition variables.

In this lesson, we will cover the topic of data sharing and resources between threads.
Imagine a scenario where an integer o needs to be modified by two threads t1 and t2.
If we are not careful in handling this scenario a data race might occur. But what is a data race exactly?

Data Race

A data race occurs when two or more threads access some shared data and at least one of them is modifying it. Because threads are scheduled by the OS, and scheduling is not under our control, you do not know upfront which thread will access the data first. The final result might depend on the order in which the threads are scheduled by the OS.

Race conditions typically occur when an operation, in order to be completed, requires multiple steps or sub-operations, or the modification of multiple pieces of data. Since these sub-operations end up being executed by the CPU as different instructions, other threads can potentially mess with the state of the data while another thread's operation is still ongoing.