
Orphaned Rules - 2 Rules


The rules listed below have no parent category.
  1. Do you optimize your TinaCMS project for clarity, performance, and reliable builds?

    Structuring and optimizing your TinaCMS project is essential to achieve clarity, enhance performance, and prevent build failures. Poorly optimized projects can lead to slow site performance, increased server load, and even failed builds due to excessive or inefficient data requests.

    Let’s explore how to structure your project effectively and apply best practices to boost performance both in runtime and during the build process.

1. Structuring Your TinaCMS Architecture

    When working with large datasets or generating multiple subcomponents, following best practices is crucial to maintain performance and clarity.

    ❌ Bad practices

    • Using deeply nested schemas with nested references

  • Deeply nested schemas make the project harder to manage and more prone to build failures
  • They can also lead to inefficient data fetching, slowing down both runtime and build processes (see the sketch below)
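
    To make this concrete, below is a contrived sketch of two mutually referencing TinaCMS collections; the collection and field names are hypothetical, only the reference field type is TinaCMS's own:

    import { Collection } from 'tinacms';

    // Anti-pattern sketch: each collection references the other, so resolving
    // one document drags a chain of related documents into every query
    // (recipe -> ingredient -> recipe -> ...)
    const recipeCollection: Collection = {
        name: 'recipe',
        label: 'Recipes',
        path: 'content/recipes',
        fields: [
            { type: 'string', name: 'name' },
            { type: 'reference', name: 'mainIngredient', collections: ['ingredient'] },
        ],
    };

    const ingredientCollection: Collection = {
        name: 'ingredient',
        label: 'Ingredients',
        path: 'content/ingredients',
        fields: [
            { type: 'string', name: 'name' },
            // Back-reference that recreates the cycle
            { type: 'reference', name: 'featuredRecipe', collections: ['recipe'] },
        ],
    };

    Figure: A contrived sketch of mutually referencing collections – flattening or dropping one of the references keeps queries shallow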

    ✅ Good practices

• Making a single request in a top-level server component and using React Context or a state management library

      • Data fetched at the top level can be stored in a React Context or a global state management solution (e.g., Redux). This allows all components to access the data without the need to pass props manually
      • This approach ensures better scalability, as subcomponents can access the necessary data directly from the context or store, eliminating redundant API calls and avoiding prop drilling
    import path from 'path';
    import fs from 'fs/promises';
    // HomePageProps, the contexts, and the components below are project-specific;
    // the generated TinaCMS client is typically imported from tina/__generated__/client
    import { client } from '../tina/__generated__/client';

    export default async function Home({ params }: HomePageProps) {
        const location = params.location;
    
        const websiteProps = await client.queries.website({
            relativePath: `${location}/website.md`,
        });
    
        const { conferencesData, footerData, speakers } = websiteProps.data;
    
        return (
            <ConferenceContext.Provider value={conferencesData}>
                <FooterContext.Provider value={footerData}>
                    <PageTransition>
                        <HomeComponent speakers={speakers} />
                    </PageTransition>
                </FooterContext.Provider>
            </ConferenceContext.Provider>
        );
    }
    
    export async function generateStaticParams() {
        const contentDir = path.join(process.cwd(), 'content/websites');
        const locations = await fs.readdir(contentDir);
    
        return locations.map((location) => ({ location }));
    }

    Figure: This code provides conferencesData and footerData via contexts, while passing speakers directly as props to HomeComponent for immediate use

• Caching data at the top level and accessing it when necessary

      • If passing props is not feasible (e.g., when a component depends on Next.js router information), you should make a general top-level request, cache the data, and then access it directly from the cache within the component
  • This approach ensures efficient data retrieval and reduces server load at build time (see the sketch below)
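
    Below is a minimal sketch of this pattern, assuming a Next.js App Router project; getWebsiteData is a hypothetical helper built on React's cache() and the generated TinaCMS client:

    import { cache } from 'react';
    import { client } from '../tina/__generated__/client';

    // cache() deduplicates calls with the same arguments during a render pass,
    // so any server component can call getWebsiteData(location) without
    // triggering a second TinaCMS request
    export const getWebsiteData = cache(async (location: string) => {
        const result = await client.queries.website({
            relativePath: `${location}/website.md`,
        });
        return result.data;
    });

    Figure: A hypothetical top-level data helper; a component that cannot receive props (e.g. one that depends on router information) can call it directly and hit the cache instead of the server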

    2. Improving Runtime Performance

    Optimizing runtime performance is key to delivering a fast and responsive user experience.

    ❌ Bad practices

• Using client-side requests instead of relying on cached data from the build process

  • This approach can negate the benefit of static site generation, where data is fetched and cached during the build
  • Making too many client-side requests increases server load and slows down the application

    ✅ Good practices

    • Using static site generation (SSG) to fetch and cache content during the build

  • With TinaCMS, data can be fetched at build time (as in the generateStaticParams example above), which:

    • minimizes dynamic fetching and enhances performance
    • delivers faster load times
    • puts less strain on the server

    3. Improving Build Performance

    To ensure smooth and reliable builds, it’s important to follow best practices that prevent excessive server load and manage data efficiently.

    ✅ Best practices

    • Write custom GraphQL queries

  • You can improve data retrieval by writing your own GraphQL queries
  • Auto-generated GraphQL queries are not optimized; as a result, they may include nested objects with redundant data (e.g. recipes that include an ingredients object, which in turn includes the same recipes again). Writing custom queries reduces the size of the returned objects and improves performance (see the sketch below)
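
    Below is a hedged sketch of such a query, using the recipes example above; it assumes the generated TinaCMS client exposes a raw request method (the exact signature varies between TinaCMS versions), and the collection and field names are hypothetical:

    import { client } from '../tina/__generated__/client';

    // Ask only for the fields the page needs, instead of the auto-generated
    // query's fully expanded nested objects
    const recipeQuery = `
      query Recipe($relativePath: String!) {
        recipe(relativePath: $relativePath) {
          name
          ingredients {
            name
          }
        }
      }
    `;

    const result = await client.request({
        query: recipeQuery,
        variables: { relativePath: 'pasta.md' },
    });

    Figure: A custom query that stops at ingredient names, avoiding the recipe/ingredient cycle described above
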
  2. Do you optimize your EF Core queries?

    EF Core provides a powerful way to interact with databases using .NET, but poor query optimization can lead to significant performance issues. Developers often fall into common pitfalls like inserting data in a loop, running excessive queries, or keeping track of too many entities. By following these best practices, you can improve performance and reduce database load.

    1. Perform bulk insertions instead of looping

    One of the most common mistakes is inserting entities one by one inside a loop. This results in multiple round trips to the database, which is extremely inefficient.

    foreach (var item in items)
    {
        context.MyEntities.Add(item);
        await context.SaveChangesAsync();
    }

    Figure: Bad example – Calling SaveChangesAsync() inside a loop causes multiple database hits

    Instead, use bulk insertion techniques like AddRangeAsync() and call SaveChangesAsync() once:

    await context.MyEntities.AddRangeAsync(items);
    await context.SaveChangesAsync();

    Figure: Good example – Using AddRangeAsync() minimizes database calls and improves performance

    For even larger datasets, consider using external libraries like EF Core Bulk Extensions for optimized bulk insert performance.
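
    As a sketch, assuming the EFCore.BulkExtensions NuGet package is installed:

    using EFCore.BulkExtensions;

    // BulkInsertAsync bypasses the EF Core change tracker and streams rows to
    // the database (via SqlBulkCopy on SQL Server), which scales far better
    // than AddRangeAsync() for very large datasets
    await context.BulkInsertAsync(items);

    Figure: Good example – EFCore.BulkExtensions handles very large inserts efficiently, at the cost of a third-party dependency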


    2. Batch queries to reduce database calls

    When retrieving data in a loop, developers sometimes execute multiple queries instead of batching them upfront.

    foreach (var id in ids)
    {
        var entity = await context.MyEntities.FirstOrDefaultAsync(e => e.Id == id);
        ProcessEntity(entity);
    }

    Figure: Bad example – Each iteration performs a separate database query, leading to N queries

    If you know the approximate size of your dataset, and your database server can comfortably return it in one go, retrieve all records in a single query before processing:

    var entities = await context.MyEntities.Where(e => ids.Contains(e.Id)).ToListAsync();
    foreach (var entity in entities)
    {
        ProcessEntity(entity);
    }

    Figure: Good example – Fetches all required records in one query, reducing database load

    Handling large datasets

    If the dataset is too large to fetch at once, consider batching:

    int batchSize = 100;
    for (int i = 0; i < ids.Count; i += batchSize)
    {
        var batch = ids.Skip(i).Take(batchSize).ToList();
        var entities = await context.MyEntities.Where(e => batch.Contains(e.Id)).ToListAsync();
        ProcessEntities(entities);
    }

    Figure: OK example – Processes data in batches to balance efficiency and memory usage

    The benefit of loading data into memory at once is that you can process it more efficiently without making multiple database calls. This opens up opportunities for parallel processing and other optimizations on the application side.
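
    For example, once the entities are in memory, CPU-bound work on them can run in parallel; ProcessEntity is the placeholder from the examples above, and it must not touch the DbContext, which is not thread-safe:

    // Safe only because the entities are already loaded into memory;
    // the DbContext itself must never be shared across threads
    Parallel.ForEach(entities, entity => ProcessEntity(entity));

    Figure: Good example – In-memory entities can be processed in parallel, which is impossible while each record sits behind its own database call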


    3. Stop tracking entities when not needed

    By default, EF Core tracks entities, which increases memory usage and slows down performance. If you're just reading data without modifying it, disable tracking.

    var entities = await context.MyEntities.ToListAsync();

    Figure: Bad example – Unnecessary tracking increases memory usage

    var entities = await context.MyEntities.AsNoTracking().ToListAsync();

    Figure: Good example – AsNoTracking() prevents EF Core from tracking entities, reducing memory usage

    Note: This optimization is not always applicable. If you need to modify entities and save them with context.SaveChangesAsync(), leave tracking enabled so EF Core can detect the changes. Only disable tracking for read-only queries. See our Rules to Better Entity Framework.


    4. Call SaveChangesAsync() less frequently

    Each call to SaveChangesAsync() triggers a database transaction and commits changes to the database, which is costly in terms of performance.

    foreach (var entity in entities)
    {
        entity.UpdatedAt = DateTime.UtcNow;
        await context.SaveChangesAsync();
    }

    Figure: Bad example – Calling SaveChangesAsync() in a loop causes multiple transactions, slowing down performance

    foreach (var entity in entities)
    {
        entity.UpdatedAt = DateTime.UtcNow;
    }
    await context.SaveChangesAsync();

    Figure: Good example – Updates all entities in memory first, then commits changes in a single transaction

    Whenever possible, defer calling SaveChangesAsync() until after all modifications have been made.


    5. Use transactions to speed up performance

    EF Core automatically wraps SaveChangesAsync() calls in a transaction, but explicit transactions allow multiple operations to be committed together, improving performance and reducing the risk of partial failures. When working with large data modifications, batch processing, or multiple related database operations, using transactions ensures data consistency and reduces performance overhead.

    foreach (var entity in entities)
    {
        context.Add(entity);
        await context.SaveChangesAsync(); // Each iteration creates a separate transaction
    }

    Figure: Bad example – Each SaveChangesAsync() call inside the loop creates a separate transaction, leading to unnecessary database overhead

    using var transaction = await context.Database.BeginTransactionAsync();
    try
    {
        foreach (var entity in entities)
        {
            context.Add(entity);
        }
    
        await context.SaveChangesAsync(); // One transaction instead of multiple
        await transaction.CommitAsync();
    }
    catch
    {
        await transaction.RollbackAsync();
        throw; // Rethrow so the failure is not silently swallowed
    }

    Figure: Good example – Using an explicit transaction ensures all entities are added in a single transaction, reducing overhead and improving reliability

    Combining transactions with batching

    For very large datasets, committing everything in one transaction may still be inefficient. Instead, process data in batches while keeping transactions:

    int batchSize = 100;
    using var transaction = await context.Database.BeginTransactionAsync();
    
    try
    {
        for (int i = 0; i < entities.Count; i += batchSize)
        {
            var batch = entities.Skip(i).Take(batchSize).ToList();
            context.AddRange(batch);
            await context.SaveChangesAsync();
        }
    
        await transaction.CommitAsync();
    }
    catch
    {
        await transaction.RollbackAsync();
        throw; // Rethrow so the failure is not silently swallowed
    }

    Figure: Good example – Transactions combined with batching allow efficient processing of large datasets while keeping database transactions minimal

    When to use transactions

    • When performing multiple related Add, Update, or Delete operations that should either all succeed or all fail (atomicity).
    • When inserting or modifying large amounts of data, reducing multiple individual transactions into a single one.
    • When using batch processing to ensure database efficiency while still maintaining consistency.
    • When working with multiple tables and ensuring referential integrity across related records.

    Summary: ✅ Optimize EF Core queries for better performance

    • Use bulk insertions (AddRangeAsync()) instead of inserting in a loop
    • Fetch data in a single query instead of querying in a loop
    • Batch large datasets to balance efficiency and memory usage
    • Use AsNoTracking() when you don’t need entity tracking
    • Call SaveChangesAsync() less frequently to reduce database overhead
    • Use transactions for multiple operations to improve efficiency and reliability

    Following these practices ensures your EF Core queries are efficient, reducing database load and improving application performance.
