[Python Patterns] Fun with Asyncio in Python 3.8

Here are some examples of using the asyncio module in Python 3.8.

[Note: I am now on Python 3.9.5]

Testing is done against SWAPI (the Star Wars API), since it is public and I can just run through a range() of numbers. See https://swapi.dev/documentation for more info.

First, the OG way with the requests module

This is my bread-and-butter, go-out-and-get-things-done way of working with APIs on the Internet. As you can see, I grab each character in serial fashion, one by one.

It averages about 6 seconds to call the API for each character individually and print out the information.

CODE

    import requests
    import time


    def main():
        start_time = time.time()

        print('OG requests')

        # We want to get people 1 through 15
        PEOPLE = 16

    url = 'https://swapi.dev/api/people/{}'

        for i in range(1, PEOPLE):
            response = requests.get(url=url.format(i))
            data = response.json()
            print("Character: {0}, Birth Year: {1}".format(
                data['name'], data['birth_year']))

        print('The script took {0:.2f} seconds!'.format(time.time() - start_time))


    if __name__ == '__main__':
        main()

Async: As Completed

This code builds a list of tasks, pushes them onto the event loop, and then yields the results as they complete.

It averages about 0.8 seconds calling the API. Nifty!

CODE

import aiohttp
import asyncio
import time


async def fetch(session, url):
    # Get the goods
    async with session.get(url) as response:
        return await response.json()


async def main():
    start_time = time.time()

    # ##################################################
    # This prints results as they complete

    print('Async as_completed')

    url = 'https://swapi.dev/api/people/{}'

    tasks = []
    async with aiohttp.ClientSession() as session:
        for i in range(1, 10):
            tasks.append(fetch(session, url.format(i)))

        for future in asyncio.as_completed(tasks):
            result = await future
            print(f'{result["name"]}')
            print(f'{result["birth_year"]}')
            print(f'{result["gender"]}')

    script_time = time.time() - start_time
    print(f'The script took {script_time:.2f} seconds!')


if __name__ == '__main__':
    asyncio.run(main())
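The completion-order behaviour is easier to see without any network calls. Here's a minimal sketch (the `simulated_fetch` and `collect` helpers are hypothetical, standing in for the aiohttp calls above) that uses asyncio.sleep to simulate requests of different durations; the fastest task comes back first even though it was scheduled last.

```python
import asyncio


async def simulated_fetch(name, delay):
    # Stand-in for an HTTP call: just sleep for `delay` seconds
    await asyncio.sleep(delay)
    return name


async def collect():
    # Scheduled slowest-first on purpose
    tasks = [simulated_fetch('slow', 0.3),
             simulated_fetch('medium', 0.2),
             simulated_fetch('fast', 0.1)]
    # as_completed yields futures in the order they finish,
    # not the order they were submitted
    return [await future for future in asyncio.as_completed(tasks)]


if __name__ == '__main__':
    print(asyncio.run(collect()))  # ['fast', 'medium', 'slow']
```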

Async: Gather

This code does the same work as the as_completed version, but the difference is that it waits for all the tasks to complete, then iterates through the collected results to print the data we'd like.

It averages about 0.9 seconds calling the API. Good!

CODE

import aiohttp
import asyncio
import time


async def fetch(session, url):
    # Get the goods
    async with session.get(url) as response:
        return await response.json()


async def main():
    start_time = time.time()

    # ##################################################
    # This prints results after they are done

    print('Async gather')

    url = 'https://swapi.dev/api/people/{}'

    tasks = []
    async with aiohttp.ClientSession() as session:
        for i in range(1, 10):
            tasks.append(fetch(session, url.format(i)))

        results = await asyncio.gather(*tasks)
        # print(results)
        for result in results:
            print(f'{result["name"]}')
            print(f'{result["birth_year"]}')
            print(f'{result["gender"]}')

    script_time = time.time() - start_time
    print(f'The script took {script_time:.2f} seconds!')


if __name__ == '__main__':
    asyncio.run(main())
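Unlike as_completed, gather returns results in the order the tasks were submitted, no matter which one finished first. A minimal offline sketch of that difference (again simulating requests with asyncio.sleep; the helper names are hypothetical):

```python
import asyncio


async def simulated_fetch(name, delay):
    # Stand-in for an HTTP call
    await asyncio.sleep(delay)
    return name


async def gather_results():
    # 'slow' is submitted first and finishes last,
    # but gather still returns its result first
    return await asyncio.gather(
        simulated_fetch('slow', 0.3),
        simulated_fetch('fast', 0.1),
    )


if __name__ == '__main__':
    print(asyncio.run(gather_results()))  # ['slow', 'fast']
```

This is why the gather version can safely zip results back up with the inputs, while the as_completed version cannot.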

Async: Pools

This code creates a Semaphore to limit the number of requests in flight at once, and finally uses the gather pattern to return the data.

It averages about 1 second calling the API. Cool!

CODE

# semaphore gather

import aiohttp
import asyncio
import time


async def fetch(sema, url, session):
    async with sema, session.get(url) as response:
        return await response.json()


async def main():
    start_time = time.time()

    # ##################################################
    # This prints results after they are done
    # Using a Semaphore to limit the number of concurrent requests

    print('Semaphore gather')

    url = 'https://swapi.dev/api/people/{}'

    tasks = []
    # create instance of Semaphore
    sema = asyncio.Semaphore(5)

    async with aiohttp.ClientSession() as session:
        for i in range(1, 10):
            # pass Semaphore and session to every GET request
            task = fetch(sema, url.format(i), session)
            tasks.append(task)

        results = await asyncio.gather(*tasks)
        # print(results)
        for result in results:
            print(f'{result["name"]}')
            print(f'{result["birth_year"]}')
            print(f'{result["gender"]}')

    script_time = time.time() - start_time
    print(f'The script took {script_time:.2f} seconds!')


if __name__ == '__main__':
    asyncio.run(main())
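To convince yourself the Semaphore really caps concurrency, here's a hypothetical offline sketch that tracks how many simulated requests are running at once. With a Semaphore(2) and six tasks, the peak never exceeds 2.

```python
import asyncio

running = 0  # how many simulated requests are in flight right now
peak = 0     # the highest value `running` ever reached


async def limited_fetch(sema):
    global running, peak
    async with sema:
        running += 1
        peak = max(peak, running)
        await asyncio.sleep(0.05)  # simulate the HTTP call
        running -= 1


async def run_pool():
    sema = asyncio.Semaphore(2)
    await asyncio.gather(*(limited_fetch(sema) for _ in range(6)))
    return peak


if __name__ == '__main__':
    print(asyncio.run(run_pool()))  # 2
```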

Conclusion

As you can see, the OG requests approach is slower by a wide margin. Async: Gather looks like the best starting point if you want to write some async code.

Personally, though, it all depends on what you are trying to get out of your code. I can see using any of the approaches described above, from a quick-and-dirty script to a daily time-sensitive program.


Code Repo

GitHub - ephergent/AsyncTesting

REFERENCES

https://www.integralist.co.uk/posts/python-asyncio/
https://pawelmhm.github.io/asyncio/python/aiohttp/2016/04/22/asyncio-aiohttp.html
https://stackoverflow.com/questions/56523043/using-python-3-7-to-make-100k-api-calls-making-100-in-parallel-using-asyncio
https://stackoverflow.com/questions/40836800/python-asyncio-semaphore-in-async-await-function


My blog posts tagged with "Python Patterns" are designed to be a quick-look reference for some Python code snippets I use a lot. They are written to be a quick starting point for future projects, so I do not need to type as much.