threads

Modern C++ concurrency - parallel quick-sort with std::future

In this short lesson we will discuss how to parallelize a simple and rather inefficient (because it is not an in-place version) implementation of quick-sort using asynchronous tasks and futures. We will perform some benchmarking and performance analysis, and we will try to understand how we can further improve our implementation.

Quick sort

In this section, I will briefly refresh your memory on quick-sort. I will do so by first showing you a simple and self-explanatory Haskell version. We will also write a serial C++ version of the same algorithm that we will use as a basis for our parallelization. Here it goes…
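Below is a minimal sketch of the kind of parallelization the lesson builds towards, assuming a list-based, non-in-place quick-sort: the lower partition is handed off to std::async and collected through a std::future while the current thread sorts the upper partition. The function name and the unconditional std::launch::async policy are illustrative assumptions, not the article's exact code.

#include <algorithm>
#include <future>
#include <list>

// Non-in-place quick-sort: the lower partition is sorted by an asynchronous
// task, the upper partition by the current thread.
template <typename T>
std::list<T> parallel_quick_sort(std::list<T> input)
{
    if (input.size() < 2)
        return input;

    std::list<T> result;
    // Move the first element into result and use it as the pivot.
    result.splice(result.begin(), input, input.begin());
    const T& pivot = *result.begin();

    // Partition the remaining elements around the pivot.
    auto divide_point = std::partition(input.begin(), input.end(),
                                       [&](const T& value) { return value < pivot; });

    std::list<T> lower;
    lower.splice(lower.end(), input, input.begin(), divide_point);

    // Sort the lower half asynchronously and the upper half in this thread.
    std::future<std::list<T>> lower_sorted =
        std::async(std::launch::async, &parallel_quick_sort<T>, std::move(lower));
    std::list<T> upper_sorted = parallel_quick_sort(std::move(input));

    result.splice(result.end(), upper_sorted);
    result.splice(result.begin(), lower_sorted.get());
    return result;
}

Note that spawning a new asynchronous task at every recursion level can quickly oversubscribe the machine, which is one obvious target for the further improvements the lesson mentions.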

Modern C++ concurrency - Returning values from Threads - std::future

Introduction

In this lesson we will talk about a way of returning values from threads; more precisely, we will talk about std::future, a mechanism that C++ offers to run asynchronous tasks and query for their result at some point in the future.
A future represents an asynchronous task, i.e. an operation that runs in parallel to the current thread and on which the latter can wait (if it needs to) until the former is ready.
You can use a future whenever you need a thread to wait for a one-off event to happen. The thread can check the status of the asynchronous operation by periodically polling the future while still performing other tasks, or it can simply wait for the future to become ready.
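As a concrete illustration of the two usage patterns described above (periodic polling versus blocking), here is a minimal, self-contained sketch; the slow_computation task and the timings are illustrative assumptions:

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

// Stand-in for an expensive operation whose result we want later.
int slow_computation()
{
    std::this_thread::sleep_for(std::chrono::seconds(2));
    return 42;
}

int main()
{
    // Launch the task asynchronously; the future will hold its result.
    std::future<int> result = std::async(std::launch::async, slow_computation);

    // Option 1: poll the future periodically while doing other work.
    while (result.wait_for(std::chrono::milliseconds(100)) != std::future_status::ready) {
        std::cout << "doing other work while waiting...\n";
    }

    // Option 2 (or the final step of option 1): block and retrieve the value.
    std::cout << "result = " << result.get() << '\n';
}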


Modern C++ Concurrency - Synchronizing threads - Condition Variables

Introduction

In the previous lesson we saw how data can be protected using a mutex. We now know how to make threads do their work concurrently without messing with shared resources and data. But sometimes we need to synchronize their actions in time, meaning that we might want a thread t1 to wait until a certain condition is true before allowing it to continue its execution.

This lesson discusses the tools that we can use to achieve such behavior efficiently using condition variables.
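A minimal sketch of the pattern described above, in which a thread t1 blocks on a condition variable until another thread makes the condition true and notifies it; the names (ready, cv, m) and the overall structure are illustrative assumptions, not the lesson's exact code:

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool ready = false;

void waiter()   // plays the role of t1
{
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return ready; });   // sleeps until ready == true
    std::cout << "condition met, continuing execution\n";
}

void notifier()
{
    {
        std::lock_guard<std::mutex> lock(m);
        ready = true;                      // make the condition true...
    }
    cv.notify_one();                       // ...and wake up the waiting thread
}

int main()
{
    std::thread t1(waiter);
    std::thread t2(notifier);
    t1.join();
    t2.join();
}

Passing a predicate to cv.wait guards against spurious wakeups and against the notification arriving before the waiter starts waiting.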


Modern C++ Concurrency - How to share data and resources between threads

In this lesson, we will cover the topic of sharing data and resources between threads. Imagine a scenario where an integer o needs to be modified by two threads t1 and t2. If we are not careful in handling this scenario, a data race might occur. But what exactly is a data race?

Data Race

A data race occurs when two or more threads access some shared data and at least one of them modifies it. Because threads are scheduled by the OS, and scheduling is not under our control, we do not know upfront which thread is going to access the data first. The final result might depend on the order in which the threads are scheduled by the OS.

Race conditions typically occur when an operation, in order to be completed, requires multiple steps or sub-operations, or the modification of multiple pieces of data. Since these sub-operations end up being executed by the CPU as separate instructions, other threads can potentially mess with the state of the data while another thread's operation is still ongoing.
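To make the scenario above concrete, here is a minimal sketch in which two threads increment the shared integer o. The increment o += 1 is a read-modify-write sequence, so without protection updates can be lost; the std::mutex shown here serializes the accesses. The function names and the iteration count are illustrative assumptions:

#include <iostream>
#include <mutex>
#include <thread>

int o = 0;      // shared data modified by both threads
std::mutex m;   // protects o

void increment_many_times()
{
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(m);  // remove this line to observe the race
        o += 1;                               // read o, add 1, write o back
    }
}

int main()
{
    std::thread t1(increment_many_times);
    std::thread t2(increment_many_times);
    t1.join();
    t2.join();
    std::cout << "o = " << o << '\n';  // always 200000 with the mutex in place
}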


Modern C++ Concurrency - How to use a thread object correctly and common pitfalls
