To show this in action, let’s implement the equivalent of the bread and jam shopping list. It is a truth drummed into every programmer by experience that, if you have a for loop going in the positive direction, it is often perfectly safe to access the results of earlier iterations.
That is, if you have reached index in the loop it is OK to look at the result of index-1, or index-2 and so on, but this isn’t the case in a Parallel.For.
Change the simple for loop to:
bool flag = true;
for (int i = 0; i < size; i++)
{
    data[i] = 1;
    if (data[new Random().Next(i)] == 0.0) flag = false;
}
MessageBox.Show(flag.ToString());
The second statement in the loop checks whether an earlier element in the array is zero. The call new Random().Next(i) generates a random integer in the range 0 up to, but not including, i (for i = 0 it simply returns 0). As the loop progresses from data[0] to data[size-1], by the time you reach data[i] all of the array elements from data[0] up to data[i] have already been set to a non-zero value, i.e. 1. That is, if you run this loop you are guaranteed to see flag = true.
The same algorithm used with a Parallel.For is much trickier. Consider:
Parallel.For(0, size, i =>
{
    data[i] = 1;
    if (data[new Random().Next(i)] == 0.0) flag = false;
});
MessageBox.Show(flag.ToString());
In this case the order in which the elements are processed isn’t guaranteed, and in most runs there will be some earlier element that hasn’t yet been processed, so the result is usually flag = false. Notice, however, that the actual situation is much worse than simply getting a different answer. Occasionally the order of evaluation will be such that all of the tested elements happen to be non-zero, and then the result is flag = true. The outcome isn’t even predictable – it’s a race condition, and this makes the program non-deterministic. The result can vary each time you run it, as if its outcome were random.
No Fix!
The random nature of a program plagued by a race condition is often put down to an intermittent hardware fault, but notice that there is no easy remedy for this problem. It isn't a problem due to locking or the use of a shared resource or any other fixable problem. The algorithm that you are trying to implement needs to access the data in a particular order, and a parallel implementation doesn't promise to evaluate in any particular order, which means that the algorithm is inherently not parallel. You might be able to re-cast the algorithm in a form that is suitable for parallel implementation, but as it stands it just doesn't work.
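One possible re-casting, sketched below, is to split the loop into two phases: first fill the array in parallel, then run the random back-check in parallel only once every element is guaranteed to be set. The class and method names here (TwoPhase, AllEarlierNonZero) are invented for illustration, and Random.Shared assumes .NET 6 or later, where it provides a thread-safe shared instance.

```csharp
using System;
using System.Threading.Tasks;

class TwoPhase
{
    // Remove the ordering dependency by splitting the loop in two.
    public static bool AllEarlierNonZero(int size)
    {
        double[] data = new double[size];

        // Phase 1: each iteration writes only data[i],
        // so any execution order is fine.
        Parallel.For(0, size, i => data[i] = 1);

        // Phase 2: every element is now set, so testing a randomly
        // chosen earlier element is safe in any order.
        // Random.Shared is thread-safe (.NET 6+).
        bool flag = true;
        Parallel.For(0, size, i =>
        {
            if (data[Random.Shared.Next(i)] == 0.0) flag = false;
        });
        return flag;
    }

    static void Main()
    {
        Console.WriteLine(AllEarlierNonZero(1000)); // always True
    }
}
```

The price of the re-cast is a second pass over the data, but in exchange the result is deterministic again.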
This order of evaluation problem is, of course, the reason why you only get a forward implementation of Parallel.For, i.e. 1, 2, 3, 4..., and not a reverse form of the loop, i.e. ...4, 3, 2, 1. As the order of evaluation isn't fixed, it would be nonsense to let you specify it in reverse order. Any algorithm that requires a loop to run in a particular order isn't inherently parallel.
The only type of Parallel.For that is free of this problem is one that doesn't manipulate the index, i.e. every expression is indexed by i and not, say, i+1 or i-3, etc.
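A minimal sketch of such a safe loop is element-wise addition of two arrays, where every read and write uses only index i. The names (SafeLoop, Add) are invented for illustration:

```csharp
using System;
using System.Threading.Tasks;

class SafeLoop
{
    // Each iteration reads a[i] and b[i] and writes sum[i] - no
    // iteration depends on any other, so every execution order
    // gives the same result.
    public static double[] Add(double[] a, double[] b)
    {
        double[] sum = new double[a.Length];
        Parallel.For(0, a.Length, i => sum[i] = a[i] + b[i]);
        return sum;
    }

    static void Main()
    {
        var a = new double[] { 1, 2, 3 };
        var b = new double[] { 10, 20, 30 };
        Console.WriteLine(string.Join(", ", Add(a, b))); // 11, 22, 33
    }
}
```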
Notice that this doesn’t mean that the Parallel.For is useless or even very primitive. Parallel.For is doing a great deal of work on your behalf. It is choosing how to partition the loop, creating the threads necessary, running the partitions one per thread and making sure that everything waits until the final thread completes the task. If you were to try to implement it from scratch you would need to generate a lot of code. This is a great simplification over the DIY version and Parallel.For is well worth using. But at the end of the day you are still using separate threads to run different portions of the for loop and as such all of the usual problems arise and it is up to you to keep them under control.
A simple-minded approach is to restrict the use of Parallel.For to tasks that are truly isolated from one another and never do anything adventurous, but that would simply be missing an opportunity. There are algorithms that can be implemented using a Parallel.For, but they need some attention to resource sharing in the same way that a raw threads solution would. It is worth mentioning that the Parallel.ForEach loop suffers from the same problems and so does Parallel.Invoke, which runs an array of Action delegates possibly in parallel.
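As a sketch of what that attention to resource sharing can look like, the Parallel.For overload with localInit and localFinally delegates lets each worker thread accumulate a private partial result, taking a lock only once per worker rather than once per iteration. The names (SharedSum, Sum, gate) are invented for illustration:

```csharp
using System;
using System.Threading.Tasks;

class SharedSum
{
    // Sum an array in parallel with thread-local partial sums.
    public static double Sum(double[] data)
    {
        double total = 0;
        object gate = new object();
        Parallel.For(0, data.Length,
            () => 0.0,                                  // per-thread initial value
            (i, state, local) => local + data[i],       // accumulate locally, no lock
            local => { lock (gate) total += local; });  // merge once per thread
        return total;
    }

    static void Main()
    {
        double[] data = new double[1000];
        for (int i = 0; i < data.Length; i++) data[i] = 1;
        Console.WriteLine(Sum(data)); // 1000
    }
}
```

Without the lock in the final merge step the additions to total could interleave and lose updates, which is exactly the kind of sharing problem a raw-threads solution would also have to handle.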
Postlude
Making concurrency easier is a good thing, but with it comes the usual set of new difficulties that we always encounter when multiple threads of execution are employed. The real danger is that, unless you know better, you can be led into believing that somehow the new features solve the old problems – they don’t.