ESP32 In MicroPython: Asyncio
Written by Harry Fairhead & Mike James   
Monday, 09 September 2024

Shared Variables and Locks

Coroutines share global variables and have their own local variables, just as functions do. If you are not used to asynchronous programming this can have some surprising consequences. The problem is that access to a global resource by more than one task carries the risk of a race condition. For example, if two tasks attempt to update a resource, and one is part way through its update when the other starts its own, then the final outcome depends on which task completes its update last. This is a “race condition”.

Given that uasyncio implements a form of asynchronous programming in which another Task only starts if the currently running Task gives up the thread, i.e. voluntarily allows another Task to run, race conditions are far less of a problem than they are with preemptive threading. You can avoid them altogether by making sure that a Task only gives up the thread when any use of a shared resource is complete. However, as hardware-oriented programs of the sort you run on the ESP32 tend to use shared hardware resources, this is more of a problem than in other situations. The solution is to use a lock of one sort or another to restrict access to the shared resource to one task at a time.

The uasyncio module contains asynchronous equivalents for most of the standard threading locks:

  • Lock
    The Lock object has three methods that control the way that tasks interact with it:

    lock.locked() Returns True if locked

    lock.acquire() Waits for the lock to be unlocked and then locks it

    lock.release() Unlocks the lock

The basic idea is that all of the tasks that want to access a shared resource follow the protocol that they first have to acquire the Lock object that is protecting it by using acquire(). If another task has already acquired the lock then subsequent attempts to acquire it suspend the task until the lock is released. When the lock is released one of the tasks waiting to acquire it is allowed to run. This means that only one task accesses the shared resource at a time and other tasks queue up to use it.

  • Event

    The Event object has four methods:

    is_set() True if the event is set and False otherwise

    set() Sets the event, any waiting tasks can now run

    clear() Clears the event

    wait() Waits for the event to be set

The Event object is intended to be used to synchronize tasks. Any number of tasks can wait on an event and then any other task can set the event and allow the waiting tasks to be scheduled to run when the thread is free. For example, a set of tasks might process a file that is downloaded by another task. The downloading task can set the event to signal to the processing tasks that the data is ready to process.

  • ThreadSafeFlag

    The ThreadSafeFlag object has three methods:

    set() Sets the flag

    clear() Clears the flag

    wait() Waits for the flag to be set

ThreadSafeFlag works like the Event object, but it can be used by functions that are not coroutines such as interrupt handlers.

The whole subject of locks and how to use them is complex and if you want to know more see Programmer’s Python: Async, ISBN:9781871962765. However, you need to be aware of two big problems in using locks. The first is that they slow things down – locks are slow to acquire and often restrict access unnecessarily. The second is the potential for deadlock, where one task is waiting on a lock that a second task holds, while the second task is waiting on a lock that the first task is holding.

Consider the following example based on a simple counter updating a global variable, myCounter:

import uasyncio
async def count():
    global myCounter
    for i in range(1000):
        temp = myCounter+1       # read and increment
        await uasyncio.sleep(0)  # give up the thread mid-update
        myCounter = temp         # write back a possibly stale value
async def main():
    await uasyncio.gather(count(),count())
    print(myCounter)
myCounter=0
uasyncio.run(main())

Each task updates myCounter a thousand times and so the total should be 2000, but if you run the program you will find that it is 1000. Where have the other thousand updates gone?

Both tasks release the main thread in the middle of updating the global variable. The first task reads myCounter into temp and then yields; the second task then reads the same value of myCounter, so when each task writes its result back, one of the two increments is lost. This happens on every update, a perfect race condition, and so the program displays 1000.

The simplest solution to this problem is not to release the main thread in the middle of an operation. As long as a task doesn’t release the main thread, the operation is effectively atomic. This is one of the benefits of single-threaded multi-tasking.
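For example, moving the yield outside the read-modify-write makes each update atomic and the counter behaves correctly. This sketch adds a CPython fallback import so it also runs on a desktop:

```python
try:
    import uasyncio as asyncio   # MicroPython
except ImportError:
    import asyncio               # CPython fallback for desktop testing

myCounter = 0

async def count():
    global myCounter
    for i in range(1000):
        myCounter = myCounter + 1   # no await inside the update: atomic
        await asyncio.sleep(0)      # yield only after the update completes

async def main():
    await asyncio.gather(count(), count())
    print(myCounter)

asyncio.run(main())
```

With the yield moved after the write-back, no task ever sees a half-finished update and the program prints 2000.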

If this approach cannot be used then there is no alternative but to add a lock, and the uasyncio module provides its own. Rather than having to call acquire and release explicitly, we can use “async with”, which acquires the lock on entry to the block and automatically releases it on exit. It can only be used in a coroutine and the task can be suspended during both the enter and the exit phase:

import uasyncio
async def count():
    global myCounter
    global myLock
    for i in range(1000):
        async with myLock:           # acquire on entry, release on exit
            temp=myCounter+1
            await uasyncio.sleep(0)  # safe to yield: the lock is held
            myCounter=temp
async def main():
    await uasyncio.gather(count(),count())
    print(myCounter)
myCounter=0
myLock=uasyncio.Lock()
uasyncio.run(main())

Now the second task has to wait until the first releases the lock before it can complete its update. Notice the use of “async with” rather than just “with”. The program now displays 2000. In this case the problem has been caused deliberately, but when you are using coroutines whose code you cannot modify, locking is sometimes the only option.



Last Updated ( Tuesday, 10 September 2024 )