Running Tasks Concurrently in Python
Python offers multiple ways to run tasks concurrently, improving efficiency in I/O-bound programs. By combining asyncio tools like gather() and create_task(), you can handle multiple tasks without waiting for each one to finish individually.
What You'll Learn
You will learn how to use async and await in Python to make your programs do many things at once. Instead of waiting for one task to finish, your program can start another task and come back later—making it faster and more efficient.
Concurrency vs. Parallelism
Concurrency means making progress on multiple tasks over the same time period, but not necessarily simultaneously; they can take turns on the same thread.
Parallelism means running tasks truly at the same time, on different CPU cores.
In Python's async world, concurrency is achieved using asyncio. It works best for I/O-bound tasks like web requests or file operations.
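Before diving in, here's the smallest possible asyncio program, just to anchor the syntax. (The coroutine name greet is purely illustrative.)

import asyncio

async def greet():
    # await pauses this coroutine and lets the event loop run other work
    await asyncio.sleep(1)
    print("Hello from a coroutine")

asyncio.run(greet())  # starts an event loop, runs the coroutine, then cleans up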
Running Tasks with asyncio.create_task()
The asyncio.create_task() function schedules a coroutine (an async function call) to run on the event loop. It returns immediately, and the task runs in the background as soon as your program next pauses at an await. This means multiple tasks can run *at the same time* instead of one after another.
Here's a simple example where we "download" two files. Instead of downloading one, waiting, and then downloading the next — we launch both downloads immediately.
import asyncio

async def download_file(n):
    print(f"Start downloading file {n}")
    await asyncio.sleep(2)
    print(f"Finished downloading file {n}")

async def main():
    task1 = asyncio.create_task(download_file(1))
    task2 = asyncio.create_task(download_file(2))
    print("Both tasks started")
    await task1
    await task2

asyncio.run(main())
🔍 How It Works
- create_task() schedules the coroutine to run on the event loop. It returns a Task object immediately, without waiting for it to finish.
- Notice that "Both tasks started" prints first: the tasks only begin running once main() pauses at its first await and hands control back to the event loop.
- await task1 and await task2 then pause main() until each download is complete.
- Because both tasks "sleep" at the same time, the total time taken is just 2 seconds, not 4!
Output:
Both tasks started
Start downloading file 1
Start downloading file 2
Finished downloading file 1
Finished downloading file 2
🚀 Instead of waiting for one download to finish before starting the next, create_task() helps you start both at once. This is a key part of writing efficient async code — perfect for things like API calls, file downloads, or database queries.
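If you want to verify the timing claim yourself, here's a sketch of the same example with time.perf_counter() added; the timing lines are the only additions to the code above.

import asyncio
import time

async def download_file(n):
    print(f"Start downloading file {n}")
    await asyncio.sleep(2)
    print(f"Finished downloading file {n}")

async def main():
    start = time.perf_counter()
    task1 = asyncio.create_task(download_file(1))
    task2 = asyncio.create_task(download_file(2))
    await task1
    await task2
    elapsed = time.perf_counter() - start
    print(f"Total time: {elapsed:.1f}s")  # ~2.0s, not ~4.0s

asyncio.run(main())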
Grouping Tasks with asyncio.gather()
The asyncio.gather() function lets you run multiple async functions at the same time and wait for all of them to finish. It's like saying, "Start all of these tasks now, and let me know when they’re all done."
This is useful when you want to collect results from multiple tasks running at once — like calling several APIs or reading many files.
import asyncio

async def fetch_data(source):
    print(f"Fetching from {source}")
    await asyncio.sleep(1)
    return f"Data from {source}"

async def main():
    results = await asyncio.gather(
        fetch_data("API 1"),
        fetch_data("API 2"),
        fetch_data("API 3"),
    )
    print(results)

asyncio.run(main())
🔍 How It Works
- fetch_data() is an async function that simulates getting data from a source (like an API).
- asyncio.gather() starts all three calls to fetch_data() at the same time.
- Each one pauses at await asyncio.sleep(1), and the event loop manages them efficiently in the background.
- Once all are finished, their results are returned together as a list.
Output:
Fetching from API 1
Fetching from API 2
Fetching from API 3
['Data from API 1', 'Data from API 2', 'Data from API 3']
✅ With gather(), all tasks run together and you get back a list of their results — in the same order you asked for them. This is a great way to group and manage multiple async operations at once.
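One detail worth stressing: even when tasks finish out of order, gather() returns results in the order you passed in the coroutines. A small sketch (the source names and delays are made up so that the second call finishes first):

import asyncio

async def fetch(source, delay):
    await asyncio.sleep(delay)
    return f"Data from {source}"

async def main():
    results = await asyncio.gather(
        fetch("slow source", 2),
        fetch("fast source", 1),  # finishes first...
    )
    print(results)  # ...but the list order matches the call order

asyncio.run(main())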
🔍 Real-World Scenarios
- Calling multiple APIs at once
- Downloading multiple files simultaneously
- Querying multiple databases concurrently
- Running independent background tasks like logging and analytics
Async concurrency shines when dealing with many waiting tasks. By using create_task() and gather(), your code stays fast, clean, and responsive.
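As a more realistic sketch of the "calling multiple APIs" scenario, here's how the gather() pattern might look with real HTTP requests. This assumes the third-party aiohttp library (pip install aiohttp), and the URLs are just examples:

import asyncio
import aiohttp  # third-party: pip install aiohttp

async def fetch_status(session, url):
    # One GET request; the event loop runs other requests while this one waits
    async with session.get(url) as response:
        return url, response.status

async def main():
    urls = ["https://example.com", "https://www.python.org"]
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*(fetch_status(session, u) for u in urls))
    for url, status in results:
        print(url, status)

asyncio.run(main())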
Example 1: Checking Multiple Websites at the Same Time
Imagine you want to check the status of several websites. If you check them one by one, it could take a while — especially if each takes time to respond. But with async, you can check all of them at once!
Below is an example using asyncio and asyncio.sleep() to simulate network delays while checking 3 different "websites":
import asyncio

async def check_website(site, delay):
    print(f"Checking {site}...")
    await asyncio.sleep(delay)  # Simulating network delay
    print(f"{site} is up!")

async def main():
    await asyncio.gather(
        check_website("google.com", 2),
        check_website("openai.com", 1),
        check_website("github.com", 3),
    )

asyncio.run(main())
🔍 How It Works
- check_website() is an async function that "checks" a site and waits for a simulated delay.
- Each check uses await asyncio.sleep() to mimic waiting for a real HTTP response.
- asyncio.gather() launches all three checks at the same time.
- They finish independently — whichever takes the least time finishes first.
- The total time is about 3 seconds (the longest one), not 6.
Sample Output:
Checking google.com...
Checking openai.com...
Checking github.com...
openai.com is up!
google.com is up!
github.com is up!
🌐 This is a great pattern when you want to perform many independent I/O operations — like pinging servers, sending multiple emails, or loading data from APIs.
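If you'd rather handle each site as soon as its check completes, rather than waiting for all of them, asyncio.as_completed() is a natural companion to this pattern. A sketch reusing the same idea, with check_website returning a string instead of printing:

import asyncio

async def check_website(site, delay):
    await asyncio.sleep(delay)  # simulated network delay
    return f"{site} is up!"

async def main():
    checks = [
        check_website("google.com", 2),
        check_website("openai.com", 1),
        check_website("github.com", 3),
    ]
    # as_completed yields awaitables in completion order, not submission order
    for finished in asyncio.as_completed(checks):
        print(await finished)  # openai.com prints first, github.com last

asyncio.run(main())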
Frequently Asked Questions
What does it mean to run tasks concurrently in Python?
It means running multiple operations at the same time without waiting for each to finish first — especially useful for I/O operations like downloads or API calls.
What is asyncio.create_task used for?
It starts an async task and lets it run in the background while your program continues doing other things.
How is asyncio.gather different from create_task?
gather() runs several coroutines at once and waits for all of them to finish, returning their results as a list. create_task() schedules a single coroutine and gives you back a Task object that you can await (or cancel) later.
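A compact sketch of the difference (the function name work is illustrative):

import asyncio

async def work(n):
    await asyncio.sleep(1)
    return n * 2

async def main():
    # create_task: schedule one coroutine now, await its result later
    task = asyncio.create_task(work(1))
    print(await task)  # 2

    # gather: start several coroutines together, get all results as a list
    print(await asyncio.gather(work(2), work(3)))  # [4, 6]

asyncio.run(main())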
Is concurrent Python code faster?
Yes, especially when tasks spend time waiting — like downloading files or calling APIs. Concurrency helps save time by doing those things together.
Can I use asyncio for CPU-heavy tasks?
No. asyncio is made for I/O-bound work. For CPU-heavy work like crunching data or processing images, use multiprocessing instead; threads won't speed up pure-Python CPU work in CPython because of the GIL.
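If you do need to mix CPU-heavy work into an async program, one common bridge is loop.run_in_executor() with a process pool. A sketch, assuming the heavy work can be expressed as a plain function (crunch here is made up):

import asyncio
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    # CPU-bound work; runs in a separate process so it can't block the event loop
    return sum(i * i for i in range(n))

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        result = await loop.run_in_executor(pool, crunch, 10_000_000)
    print(result)

if __name__ == "__main__":  # required for multiprocessing on some platforms
    asyncio.run(main())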
What's Next?
Next, you'll dive into Python modules, which allow you to organize your code into separate files and reuse functionality across different programs.