
Working with Large Datasets in Python

Working with large datasets can be challenging, especially when a database contains millions of records, as is often the case in real-world company databases.


In this tutorial, we'll explore some techniques for managing and processing large amounts of data using Python and SQLAlchemy.

1. Limiting Results

As we've seen before, one of the simplest ways to work with large datasets is to limit the number of results returned by a query. This is especially useful when a table contains millions of records and you only need a small subset of them for your analysis.


In SQLAlchemy, you can limit the number of results returned by a query using the limit method. For example, to select the first 100 records from the employees table, you can use the following code:

from sqlalchemy import create_engine, MetaData, Table, select

engine = create_engine('mysql://user:password@localhost:3306/employees')
metadata = MetaData()
# reflect the existing employees table from the database
employees = Table('employees', metadata, autoload_with=engine)

query = select(employees).limit(100)

with engine.connect() as conn:
    result = conn.execute(query)
    for row in result:
        print(row)
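
When the table is wide, it can also help to combine the limit with an explicit column list so that only the data you need is transferred. A small sketch, assuming the emp_no, first_name, and last_name columns used later in this tutorial:

# select only the columns you need along with the limit
query = select(employees.c.emp_no, employees.c.first_name, employees.c.last_name).limit(100)

with engine.connect() as conn:
    for row in conn.execute(query):
        print(row)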

2. Using Pagination

Another technique for working with large datasets is pagination. Pagination involves breaking up the results of a query into smaller chunks, or pages, and fetching each page separately. This can help to reduce memory usage and improve performance when working with large datasets.


In SQLAlchemy, you can use the limit and offset methods to implement pagination. The limit method limits the number of records returned by the query, while the offset method specifies the starting point for the query.


For example, to fetch records 101-200 from the employees table, you can use the following code:

from sqlalchemy import create_engine, MetaData, Table, select

engine = create_engine('mysql://user:password@localhost:3306/employees')
metadata = MetaData()
employees = Table('employees', metadata, autoload_with=engine)

# skip the first 100 rows, then return the next 100 (rows 101-200)
query = select(employees).limit(100).offset(100)

with engine.connect() as conn:
    result = conn.execute(query)
    for row in result:
        print(row)
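
To page through arbitrary page numbers, the same limit/offset pair can be wrapped in a small helper. This is just an illustrative sketch; fetch_page and its parameters are not part of SQLAlchemy:

def fetch_page(conn, table, page, page_size=100):
    # page is 1-based, so page 2 with page_size 100 returns rows 101-200
    query = select(table).limit(page_size).offset((page - 1) * page_size)
    return conn.execute(query).fetchall()

with engine.connect() as conn:
    for row in fetch_page(conn, employees, page=2):
        print(row)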

3. Chunking Data

Sometimes, even pagination isn't enough to handle really large datasets. In these cases, you can chunk your data into smaller pieces and process each chunk separately. This can help to reduce memory usage and improve performance.


In SQLAlchemy, you can do this by issuing the query repeatedly with limit and offset, so that only one chunk of rows is pulled from the database at a time. (A result's fetchmany() method can also return rows in groups, but most drivers buffer the full result in memory first unless streaming is enabled.) For example, to fetch 1000 rows at a time from the employees table, you can use the following code:

from sqlalchemy import create_engine, MetaData, Table, select

engine = create_engine('mysql://user:password@localhost:3306/employees')
metadata = MetaData()
employees = Table('employees', metadata, autoload_with=engine)

query = select(employees)

with engine.connect() as conn:
    chunk_size = 1000
    offset = 0
    while True:
        # fetch the next chunk of rows
        chunk = conn.execute(query.limit(chunk_size).offset(offset))
        results = chunk.fetchall()
        if not results:
            break
        for row in results:
            print(row)
        offset += chunk_size
This code fetches 1000 rows at a time from the employees table and processes each chunk separately.
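
One caveat with this approach is that the database still has to scan past all of the skipped rows for each offset, so later chunks get progressively slower. A keyset-style variant that filters on the primary key avoids that cost; here is a minimal sketch, assuming the employees table has an integer primary key column named emp_no (as in the MySQL sample database):

from sqlalchemy import select

chunk_size = 1000
last_id = 0

with engine.connect() as conn:
    while True:
        # fetch the next chunk after the last primary key value we saw
        query = (
            select(employees)
            .where(employees.c.emp_no > last_id)
            .order_by(employees.c.emp_no)
            .limit(chunk_size)
        )
        rows = conn.execute(query).fetchall()
        if not rows:
            break
        for row in rows:
            print(row)
        last_id = rows[-1].emp_no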

4. Streaming Data

When working with large datasets, it may be useful to stream the data from the database rather than loading the entire dataset into memory. This can be done using the yield_per() method in SQLAlchemy. The yield_per() method will fetch a certain number of rows at a time, allowing you to process the data in smaller chunks.


Here is an example of how to use yield_per() to stream data from the employees table in the MySQL employees database:

from sqlalchemy.orm import sessionmaker

# create a session to work with the database
# (engine and the reflected employees table come from the earlier examples)
Session = sessionmaker(bind=engine)
session = Session()

# stream the data from the employees table in chunks of 1000 rows
query = session.query(employees).yield_per(1000)

# process the data
for employee in query:
    # do something with the employee data
    print(employee.emp_no, employee.first_name, employee.last_name)
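
The same effect is available outside the ORM: with SQLAlchemy 1.4 or later you can ask the connection for a server-side cursor and iterate the result in partitions. A minimal Core-level sketch, assuming a driver that supports streaming results:

from sqlalchemy import select

with engine.connect() as conn:
    result = conn.execution_options(stream_results=True).execute(select(employees))
    for partition in result.partitions(1000):
        for row in partition:
            print(row)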

5. Batch Processing

When updating or inserting large amounts of data, it can be more efficient to do it in batches rather than one row at a time. This can be done using the add_all() method in SQLAlchemy.


Here is an example of how to use add_all() to insert a batch of employees into the employees table. It assumes an ORM-mapped Employee class for that table, rather than the reflected Table object used in the earlier examples:

from sqlalchemy.orm import sessionmaker

# create a session to work with the database
Session = sessionmaker(bind=engine)
session = Session()

# create a list of employees to insert
employees_list = [
    {'emp_no': 10001, 'first_name': 'John', 'last_name': 'Doe'},
    {'emp_no': 10002, 'first_name': 'Jane', 'last_name': 'Smith'},
    # more employees...
]

# insert the employees in batches of 1000
# (Employee is the ORM-mapped class for the employees table)
batch_size = 1000
for i in range(0, len(employees_list), batch_size):
    batch = employees_list[i:i+batch_size]
    session.add_all([Employee(**e) for e in batch])
    session.commit()
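
If you only have the reflected Table object and no mapped class, a similar batch insert can be done at the Core level by passing a list of dictionaries to a single insert() execution. A sketch using the same employees_list:

from sqlalchemy import insert

# executemany-style batch insert using the reflected employees table
with engine.begin() as conn:
    batch_size = 1000
    for i in range(0, len(employees_list), batch_size):
        conn.execute(insert(employees), employees_list[i:i+batch_size])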

6. Parallel Processing

Another technique for processing large datasets is parallel processing, which you can do with Python's concurrent.futures module. Threads are most helpful when the per-row work is I/O-bound, for example calling an external service for each record. Here is an example of how to use the ThreadPoolExecutor class to run a function in parallel over a large dataset:

from concurrent.futures import ThreadPoolExecutor
from sqlalchemy.orm import sessionmaker

# create a session to work with the database
Session = sessionmaker(bind=engine)
session = Session()

# define a function to process an employee record
def process_employee(employee):
    # do something with the employee data
    # (a Session is not thread-safe, so work only with the already-fetched
    #  row values here, not with the session itself)
    print(employee.emp_no, employee.first_name, employee.last_name)

# query the employees table
query = session.query(employees)

# create a ThreadPoolExecutor with 4 threads
with ThreadPoolExecutor(max_workers=4) as executor:
    # submit the process_employee function for each row in the query
    futures = [executor.submit(process_employee, row) for row in query]

    # wait for all the functions to complete
    for future in futures:
        future.result()

These are just a few techniques for managing and processing large amounts of data in Python and SQLAlchemy. Depending on your specific use case, other techniques such as indexing, partitioning, and caching may also be useful.
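
For example, if your queries frequently filter on last_name, an index on that column can make them substantially faster. A minimal sketch using the reflected employees table from the examples above (the index name is illustrative, and creating it will fail if an index with that name already exists):

from sqlalchemy import Index

# create an index on last_name to speed up queries that filter on it
last_name_index = Index('ix_employees_last_name', employees.c.last_name)
last_name_index.create(bind=engine)

Combined with the query techniques above, these can keep even very large tables manageable.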