Introduction
In this lesson we will talk about a way of returning values from threads; more precisely, we will talk about std::future.
A future represents an asynchronous task, i.e. an operation that runs in parallel to the current thread and on which the latter can wait (if it needs to) until the result is ready.
You can use a future whenever you need a thread to wait for a one-off event to happen. The thread can check the status of the asynchronous operation by periodically polling the future while still performing other tasks, or it can simply wait for the future to become ready.
Future
To better understand what a future is, imagine a scenario in which your algorithm has to perform three tasks, T1, T2 and T3, where T3 depends on the results of both T1 and T2, while T1 and T2 are independent of each other. The thread responsible for the execution of such an algorithm can spawn T1 as an asynchronous task, execute T2 in the meantime, and then wait (if necessary) for the result of T1 before finally running T3.
std::async()
An asynchronous task can be created using the std::async function, which returns a future and has the following signature:
```cpp
template< class Function, class... Args >
std::future<std::result_of_t<std::decay_t<Function>(std::decay_t<Args>...)>>
    async( Function&& f, Args&&... args );
```
As you can see, std::async returns a future whose type depends on the type of the Function that we supply to async.
Given a Function f of type RetType(Type1, Type2, ..., TypeN), when async is called as async(f, arg1, arg2, ..., argN); it returns a future of type std::future<RetType>. It is not surprising, then, that the future holds the return value of f.
Given a future, its payload can be retrieved by calling its get() member function, which blocks the calling thread until the future is ready.
An Example
Let's start with the code for the example described above:
```cpp
#include <thread>
#include <future>
#include <iostream>
#include <chrono>

double T1(){
    std::cout << "T1 : start" << std::endl;
    std::this_thread::sleep_for(std::chrono::seconds(5));
    std::cout << "T1 : end" << std::endl;
    return 5.2;
}

int T2(){
    std::cout << "T2 : start" << std::endl;
    std::this_thread::sleep_for(std::chrono::seconds(5));
    std::cout << "T2 : end" << std::endl;
    return 444;
}

void T3(double arg1, int arg2){
    std::cout << "T3 : start" << std::endl;
    std::this_thread::sleep_for(std::chrono::seconds(1));
    std::cout << "T3 : end" << std::endl;
}

int main(){
    auto start = std::chrono::high_resolution_clock::now();
    std::cout << "I'm the main thread: start" << std::endl;
    {
        auto future_t1 = std::async(T1);
        const int res_t2 = T2();
        const double res_t1 = future_t1.get();
        T3(res_t1, res_t2);
    }
    std::cout << "I am the main thread: completed" << std::endl;
    auto end = std::chrono::high_resolution_clock::now();
    std::chrono::duration<double, std::milli> elapsed = end - start;
    std::cout << "Elapsed time " << elapsed.count() << " ms\n";
    return 0;
}
```
As you can see, the main thread uses std::async to spawn an asynchronous task. std::async immediately returns an instance of std::future<double>, and main continues by executing the code for T2().

When T2() is complete, main simply waits for the asynchronous task to finish using the std::future::get() function.
get() blocks the calling thread until the future becomes ready; at that moment it returns the payload, which in this case is a double. main() then proceeds with T3().
The following is the output that I get from the previous code, compiled with clang++ 7.0.1 on my laptop:
```
I'm the main thread: start
T2 : start
T1 : start
T2 : end
T1 : end
T3 : start
T3 : end
I am the main thread: completed
Elapsed time 6000.65 ms
```
Please note that, since the total duration is ~6s (T1 and T2 each take 5s, so a fully serial execution would need ~11s including T3), the std::async task actually runs in parallel with main.
std::launch
The first thing to note is that std::async(f) alone does not guarantee that f will run on a separate thread: the implementation may also defer its execution until get() is called. Thankfully, the standard also gives us the possibility to control how the asynchronous task is executed: std::async comes with the following overload:
```cpp
template< class Function, class... Args >
std::future<std::result_of_t<std::decay_t<Function>(std::decay_t<Args>...)>>
    async( std::launch policy, Function&& f, Args&&... args );
```
The first parameter of this overload is of type std::launch, an enum which comes with two values:
- std::launch::async: a new thread is launched to execute the task asynchronously.
- std::launch::deferred: the task is executed on the calling thread the first time its result is requested. This effectively means that our task will be lazily evaluated (in other words, executed in a call-by-need manner): the task starts executing only when get() is called on the corresponding future. This also means that the task might not be executed at all.
Note that for the overload of async that takes no std::launch parameter, the default policy is std::launch::async | std::launch::deferred (both options activated). In this case, the implementation has the right to choose which method to use.
If in the example above we instead write auto future_t1 = std::async(std::launch::deferred, T1);, the output that I obtain is the following:
```
I'm the main thread: start
T2 : start
T2 : end
T1 : start
T1 : end
T3 : start
T3 : end
I am the main thread: completed
Elapsed time 11000.6 ms
```
Note that T1 now starts only when get() is called, i.e. after T2 has completed: the execution is fully serial, and the total elapsed time grows to ~11s.
std::future - other functions
get() is not the only useful function that std::future offers. The following are the most important ones:
- valid(): a boolean function returning true if the future refers to a valid shared state. It is always true unless you are working with a moved-from and/or default-constructed future. Note that calling any function other than the destructor, the move-assignment operator, or valid() itself on a future for which valid() == false results in undefined behavior.
- wait(): similar to get(), but it does not retrieve and consume the payload of the future. It blocks the execution of the thread until the value is ready.
- std::future_status wait_for(const std::chrono::duration<Rep,Period>& timeout_duration): works similarly to wait(), but only waits for the time duration specified by its parameter. wait_for returns as soon as the value is ready or the timeout ends. It returns a future_status, an enum specifying the status of the asynchronous task at that point, which can assume one of the following values:
  - future_status::ready: the payload is ready.
  - future_status::deferred: as in std::launch::deferred, states that the value will be computed only when explicitly requested.
  - future_status::timeout: covers the case in which wait_for waited for the whole duration of the timeout without the payload becoming ready.
Conclusion
We have seen how to use futures, which allow for asynchronous computation in C++. In the next lesson we will discuss how to use what we covered here to implement a quicksort that exploits futures to speed up its execution.