Monday, 24 February 2025

Azure DevOps

Azure DevOps (ADO), GitHub Enterprise, or GitLab

Infrastructure automation tooling such as Terraform and Ansible

CI/CD best practices and implementation

AWS infrastructure and cloud native services such as Lambda and EKS

Windows Server automation

Various Git workflow patterns such as Gitflow and trunk-based development

Working within a highly regulated industry such as Financial Services or Healthcare

Ideally you will also have experience with:

PowerShell, Bash, Python, and/or Go development

Supporting .NET (Core & Framework), Node.js, and Go applications

Containers (Windows and Linux)

Kubernetes

Database automation & administration

Cloud cost management & automation

Network/Cloud security core concepts and tooling

AppSec/DevSecOps tooling such as Veracode SAST and DAST

Automated software testing (unit, component, API, functional)

GitOps methodologies

Effective verbal and written communication skills, with strong relationship-building, collaboration, and organizational skills.


Key Skills

10+ years of proven experience as a DevOps Engineer or in a similar software engineering role

Relevant business/computer science degree or equivalent experience

Experienced technology leader with a minimum of 10+ years of DevOps implementation and data platform architecture experience, and deep technology expertise

Understanding of cloud architecture and deployment models (IaaS, PaaS, SaaS)

Experience with Azure DevOps services (e.g., Pipelines, Boards, Repos, Artifacts)

Hands-on experience with automation tools such as Terraform, ARM templates, and Bicep for infrastructure as code (IaC)

Good experience with configuration management tools such as Ansible, Chef, or Puppet

Knowledge of languages such as PowerShell, Python, Bash, or shell for scripting and automation

Proficiency in Docker and Kubernetes for containerized application development and deployment

Familiarity with tools like Azure Monitor, Log Analytics, and Application Insights, as well as third-party tools like Grafana or the ELK stack

Understanding of DevSecOps practices and integrating security into CI/CD pipelines

Understanding of networking concepts such as DNS, load balancing, firewalls, VNets, NSGs, routes, service tags, etc.

Strong troubleshooting skills for resolving deployment issues and improving systems

Work with the respective business units such as Finance, Risk, Marketing, BIU, Retail, Analytics, etc.


DevOps Engineer (Azure, Jenkins, Kubernetes, TFS, Harness)

In today's highly competitive retirement market, firms must not only deliver superior customer value but also cater to challenging customer requirements and solve complex problems efficiently.

Throughout the industry there is mounting pressure on organizations to do more, requiring a clear technology strategy that not only addresses the demands of today, but also enables the growth and performance of tomorrow.

Sound interesting?


What you will be doing

This position is with the Retirement Engineering group and carries the following responsibilities:

Maintaining Jenkins and TFS build pipelines.

Working on Harness, Kubernetes, and Azure for continuous delivery.

Conforming to DevOps processes, work instructions and standards.

Providing support, training, and assistance to others.

Providing prompt escalation of functional, technical, and project issues to management.

Identifying opportunities for continuous improvement in all that we do.

Making recommendations and directing improvements to the software development lifecycle process.

Excellent analytical, decision-making, problem-solving, interpersonal, team-building, negotiation, conflict-management, and time-management skills

What you bring:

Skillset:

Jenkins pipeline setup and maintenance.

Work experience with Azure, Kubernetes, Harness, and TFS build pipelines.

Strong problem-solving skills.

Detail- and team-oriented

Excellent verbal and written communication

Ability to focus on deadlines and deliverables.

Ability to be flexible as new projects are assigned.

Working knowledge of web technologies (e.g., REST services)


Qualifications

Degree or equivalent


Competencies

Fluent in English

Excellent communicator: able to discuss technical and commercial solutions with internal and external parties and adapt depending on the technical or business focus of the discussion.

Attention to detail: a track record of authoring high-quality documentation.

Organized approach: able to manage and adapt priorities according to client and internal requirements.

Self-starter with a team mindset: able to work autonomously and as part of a global team

What we offer you

A multifaceted job with a high degree of responsibility and a broad spectrum of opportunities

A broad range of professional education and personal development possibilities: FIS is your final career step!

A competitive salary and benefits

A variety of career development tools, resources, and opportunities

Role: DevOps Engineer

Industry Type: Recruitment / Staffing

Department: Engineering - Software & QA

Employment Type: Full Time, Permanent

Role Category: DevOps

Azure Data Engineering

Job description


Are you passionate about building scalable data solutions in the cloud? We are looking for an experienced Data Engineer skilled in (AWS/Azure) technologies to join our dynamic team. This role requires expertise in designing, developing, and optimizing data pipelines, models, and cloud-based data architectures.



Your Future Employer: A leading global analytics and digital solutions company empowering businesses with cutting-edge cloud and data engineering capabilities.



Responsibilities:


1. Designing and implementing scalable data pipelines using (AWS/Azure) services.


2. Developing and optimizing ETL workflows using (AWS Glue/Azure Data Factory).


3. Building and maintaining data models using SQL, (Redshift/Azure Analysis Services).


4. Automating CI/CD pipelines leveraging (AWS DevOps/Azure DevOps).


5. Ensuring data governance, security, and compliance best practices.


6. Collaborating with business and technology teams to define data requirements.



Requirements:


1. Bachelor's degree in Computer Science, IT, or a related field.


2. 5+ years of experience in data engineering with expertise in (AWS/Azure) platforms.


3. Strong experience with SQL, Python, Spark, and cloud-based data technologies.


4. Hands-on experience with (AWS Glue, Redshift/Azure Data Factory, SSAS).


5. Knowledge of CI/CD pipeline automation using (AWS DevOps/Azure DevOps).



What's in it for you?


1. Opportunity to work with cutting-edge cloud and data technologies.


2. A collaborative and innovative work environment.


3. Career growth and professional development opportunities.



Reach Us: If you think this role is aligned with your career, kindly write an email along with your updated CV to shreya.mohan@crescendogroup.in for a confidential discussion on the role.


As an Azure Data Engineer, the candidate is expected to specialize in designing, implementing, and managing large-scale data solutions on the Microsoft Azure cloud platform. They should possess expertise in various aspects of data engineering, such as data storage, data integration, and analytics, using Azure data services. The ideal candidate will have a strong background in data engineering, Azure cloud services, and data processing technologies.


Requirements:

1. 5+ years of experience in data engineering, preferably on Microsoft Azure. This requirement increases for more senior positions, in line with our other data engineering (DE) roles.

2. Strong knowledge of Azure cloud services, including Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Storage.

3. Experience with data processing technologies, such as Apache Spark, Apache Hadoop, and SQL.

4. Strong understanding of data modeling, data warehousing, and data governance.

5. Experience with data security, compliance, and regulatory requirements.

6. Strong programming skills in languages such as Python, Scala, or Java.

7. Experience with agile development methodologies and version control systems such as Git

8. Azure certifications, such as Azure Data Engineer Associate or Azure Solutions Architect Expert.

9. Experience with containerization technologies such as Docker.

10. Knowledge of data visualization tools such as Power BI, Tableau, etc.

11. Experience with machine learning and AI technologies will be an added advantage.


Soft Skills:


Problem-solving: Solve complex data problems effectively.

Communication: Communicate clearly with both technical and non-technical team members.

Attention to detail: Pay close attention to detail to ensure accuracy.

Adaptability and learning: Stay up to date with new technologies and trends.

Teamwork: Collaborate well with others for project success.

Leadership and mentorship: Take on leadership roles and mentor junior team members for growth.




Create and maintain data pipelines that efficiently move data from various sources to storage and processing systems


Extract, transform, and load (ETL) data using tools like Azure Data Factory


Work with Azure SQL Database, Azure Data Lake Storage, Azure Cosmos DB, and Azure Blob Storage to design scalable and performant data storage solutions


Use technologies like Azure Databricks and Apache Spark to process and analyze large volumes of data


Implement data governance procedures and ensure compliance with security policies




Skills:



Proficiency in SQL and other data query languages


Experience with Azure data storage solutions


Knowledge of data integration tools


Familiarity with data processing frameworks


Understanding of data modeling and schema design principles


Ability to work with large datasets and perform data analysis


Strong problem-solving and troubleshooting skills



ETL, Azure Cosmos DB, Spark, Python, SQL, Azure, Data Engineering





We are seeking a skilled Azure Data Engineer to join our team and play a key role in designing, implementing, and managing our data solutions on the Azure cloud platform. The ideal candidate will have hands-on experience with Azure data services and a strong background in data engineering and analytics.

Responsibilities:

Design, develop, and deploy scalable and efficient data pipelines on the Azure cloud platform using services such as Azure Data Factory, Azure Databricks, and Azure Synapse Analytics.

Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and translate them into technical solutions.

Build and maintain data warehouses, data lakes, and other data storage solutions on Azure.

Implement data governance and security best practices to ensure data quality, privacy, and compliance.

Optimize data processing and storage for performance, cost, and scalability.

Monitor, troubleshoot, and optimize data pipelines and systems for reliability and efficiency.

Stay up-to-date with the latest Azure data services and technologies and recommend best practices and optimizations.

Requirements:

Bachelor's degree in Computer Science, Engineering, or related field.



A bachelor's degree in Computer Science or a related field with 6-10 years of technology experience

Strong experience in System Integration, Application Development, or Data Warehouse projects across technologies used in the enterprise space

Software development experience using object-oriented languages (e.g., Python, PySpark) and frameworks

Database programming using any flavours of SQL

Expertise in relational and dimensional modelling, including big data technologies

Exposure across the entire SDLC process, including testing and deployment

Expertise in Microsoft Azure is mandatory, including components like Azure Data Factory, Azure Data Lake Storage, Azure SQL, Azure Databricks, HDInsight, ML Service, etc.

Good knowledge of Python and Spark is required

Good understanding of how to enable analytics using cloud technology and MLOps

Experience with Azure infrastructure and Azure DevOps will be a strong plus

Proven track record in keeping existing technical skills and developing new ones, so that you can make strong contributions to deep architecture discussions around systems and applications in the cloud (Azure)

Characteristics of a forward thinker and self-starter

Ability to work with a global team of consulting professionals across multiple projects

Knack for helping an organization to understand application architectures and integration approaches, to architect advanced cloud-based solutions, and to help launch the build-out of those systems


Monday, 17 February 2025

Angular calendar issues

 

Date format issue in Angular Material: the selected date was coming back one day less than expected (a timezone-related off-by-one when the local date is converted to UTC).

Resolved using:

    // Normalize the selected dates to UTC midnight so the local timezone
    // offset no longer shifts the value back by one day
    let sDate = this.Date_Prepared_Start;
    let eDate = this.Date_Prepared_End;
    sDate = new Date(Date.UTC(sDate.getFullYear(), sDate.getMonth(), sDate.getDate()));
    eDate = new Date(Date.UTC(eDate.getFullYear(), eDate.getMonth(), eDate.getDate()));

SQL Pagination

 

Database Schema

I created a simple table to demonstrate pagination techniques. The table is seeded with 1,000,000 records for testing purposes, which should be enough to show the performance difference between offset and cursor pagination.

We'll use the following SQL schema for the examples:

CREATE TABLE user_notes (
    id uuid NOT NULL,
    user_id uuid NOT NULL,
    note character varying(500),
    date date NOT NULL,
    CONSTRAINT pk_user_notes PRIMARY KEY (id)
);

And here's the C# class representing the UserNote entity:

public class UserNote
{
    public Guid Id { get; set; }
    public Guid UserId { get; set; }
    public string? Note { get; set; }
    public DateOnly Date { get; set; }
}
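
The endpoints below inject an AppDbContext that isn't shown in this post. Here's a minimal sketch of what it could look like, assuming the Npgsql EF Core provider and explicit mapping to the snake_case columns from the schema above (the registration snippet and the "Database" connection string name are illustrative assumptions):

using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options)
    {
    }

    public DbSet<UserNote> UserNotes => Set<UserNote>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Map the entity to the user_notes table and its snake_case columns
        modelBuilder.Entity<UserNote>(builder =>
        {
            builder.ToTable("user_notes");
            builder.HasKey(x => x.Id);
            builder.Property(x => x.Id).HasColumnName("id");
            builder.Property(x => x.UserId).HasColumnName("user_id");
            builder.Property(x => x.Note).HasColumnName("note").HasMaxLength(500);
            builder.Property(x => x.Date).HasColumnName("date");
        });
    }
}

// Registration in Program.cs (connection string name is an assumption):
// builder.Services.AddDbContext<AppDbContext>(options =>
//     options.UseNpgsql(builder.Configuration.GetConnectionString("Database")));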

I will use PostgreSQL as the database, but the concepts also apply to other databases.
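
The post doesn't show how the 1,000,000 test records were seeded. Here's a rough sketch of one possible approach, assuming batched inserts through EF Core are good enough for a one-off seed (SeedUserNotesAsync is a hypothetical helper, not code from the original):

// Seed the user_notes table with 1,000,000 rows for testing
static async Task SeedUserNotesAsync(AppDbContext dbContext)
{
    var random = new Random();
    // A fixed pool of user ids so notes are spread across "users"
    var userIds = Enumerable.Range(0, 100).Select(_ => Guid.NewGuid()).ToArray();

    const int total = 1_000_000;
    const int batchSize = 10_000;

    for (int i = 0; i < total; i += batchSize)
    {
        var batch = Enumerable.Range(0, batchSize)
            .Select(_ => new UserNote
            {
                Id = Guid.NewGuid(),
                UserId = userIds[random.Next(userIds.Length)],
                Note = $"Test note {Guid.NewGuid()}",
                // Spread dates over roughly the last three years
                Date = DateOnly.FromDateTime(DateTime.UtcNow.AddDays(-random.Next(0, 1095)))
            });

        dbContext.UserNotes.AddRange(batch);
        await dbContext.SaveChangesAsync();
        dbContext.ChangeTracker.Clear(); // Don't keep a million entities in the change tracker
    }
}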

Offset Pagination: The Traditional Approach

Offset pagination uses Skip and Take operations. We skip a certain number of rows and take a fixed number of rows. These usually translate to OFFSET and LIMIT in SQL queries.

Here's an example of offset pagination in ASP.NET Core:

app.MapGet("/offset", async (
    AppDbContext dbContext,
    int page = 1,
    int pageSize = 10,
    CancellationToken cancellationToken = default) =>
{
    if (page < 1) return Results.BadRequest("Page must be greater than 0");
    if (pageSize < 1) return Results.BadRequest("Page size must be greater than 0");
    if (pageSize > 100) return Results.BadRequest("Page size must be less than or equal to 100");

    var query = dbContext.UserNotes
        .OrderByDescending(x => x.Date)
        .ThenByDescending(x => x.Id);

    // Offset pagination typically counts the total number of items
    var totalCount = await query.CountAsync(cancellationToken);
    var totalPages = (int)Math.Ceiling(totalCount / (double)pageSize);

    // Skip and take the required number of items
    var items = await query
        .Skip((page - 1) * pageSize)
        .Take(pageSize)
        .ToListAsync(cancellationToken);

    return Results.Ok(new
    {
        Items = items,
        Page = page,
        PageSize = pageSize,
        TotalCount = totalCount,
        TotalPages = totalPages,
        HasNextPage = page < totalPages,
        HasPreviousPage = page > 1
    });
});

Note that I'm sorting the results by Date and Id in descending order. This ensures consistent results when paginating.

Here's the generated SQL for offset pagination:

-- This query is sent first
SELECT count(*)::int FROM user_notes AS u;

-- Followed by the actual data query
SELECT u.id, u.date, u.note, u.user_id
FROM user_notes AS u
ORDER BY u.date DESC, u.id DESC
LIMIT @pageSize OFFSET @offset;

Limitations of Offset Pagination:

  1. Performance degrades as offset increases because the database must scan and discard all rows before the offset
  2. Risk of missing or duplicating items when data changes between pages
  3. Inconsistent results with concurrent updates

Cursor-Based Pagination: A Faster Approach

Cursor pagination uses a reference point (cursor) to fetch the next set of results. This reference point is typically a unique identifier or a combination of fields that define the sort order.

I'll use the Date and Id fields to create a cursor for our UserNotes table. The cursor is a composite of these two fields, allowing us to paginate efficiently.

Here's an example of cursor pagination in ASP.NET Core:

app.MapGet("/cursor", async (
    AppDbContext dbContext,
    DateOnly? date = null,
    Guid? lastId = null,
    int limit = 10,
    CancellationToken cancellationToken = default) =>
{
    if (limit < 1) return Results.BadRequest("Limit must be greater than 0");
    if (limit > 100) return Results.BadRequest("Limit must be less than or equal to 100");

    var query = dbContext.UserNotes.AsQueryable();

    if (date != null && lastId != null)
    {
        // Use the cursor to fetch the next set of results
        // If we were sorting in ASC order, we'd use > instead of <
        query = query.Where(x => x.Date < date || (x.Date == date && x.Id <= lastId));
    }

    // Fetch the items and determine if there are more
    var items = await query
        .OrderByDescending(x => x.Date)
        .ThenByDescending(x => x.Id)
        .Take(limit + 1)
        .ToListAsync(cancellationToken);

    // Extract the cursor and ID for the next page
    bool hasMore = items.Count > limit;
    DateOnly? nextDate = hasMore ? items[^1].Date : null;
    Guid? nextLastId = hasMore ? items[^1].Id : null;

    // Remove the extra item before returning results (only if we actually fetched one)
    if (hasMore)
    {
        items.RemoveAt(items.Count - 1);
    }

    return Results.Ok(new
    {
        Items = items,
        NextDate = nextDate,
        NextLastId = nextLastId,
        HasMore = hasMore
    });
});

The sort order is the same as in the offset pagination example. However, the sort order is critical for consistent results with cursor pagination. Because the Date isn't a unique value in our table, we use the Id field to handle ties. This ensures that we don't miss or duplicate items when paginating.

Here's the generated SQL for cursor pagination:

SELECT u.id, u.date, u.note, u.user_id
FROM user_notes AS u
WHERE u.date < @date OR (u.date = @date AND u.id <= @lastId)
ORDER BY u.date DESC, u.id DESC
LIMIT @limit;

Note that there's no OFFSET in the query. We're directly seeking the rows based on the cursor, which is more efficient than offset pagination.

The COUNT query is omitted in cursor pagination because we're not counting the total number of items. This can be a limitation if you need to display the total number of pages upfront. However, the performance benefits of cursor pagination often outweigh this limitation.

Limitations of Cursor Pagination:

  1. If users need to change sort fields dynamically, cursor pagination becomes significantly more complicated since the cursor must incorporate all sort conditions
  2. Users can't jump to a specific page number - they must traverse sequentially through the pages
  3. More complex to implement correctly compared to offset pagination, especially when handling ties and ensuring stable ordering

Examining the SQL Execution Plans

I wanted to compare the execution plans for offset and cursor pagination. I used the EXPLAIN ANALYZE command in PostgreSQL to see the query plans.

Here's the offset pagination query:

SELECT u.id, u.date, u.note, u.user_id
FROM user_notes AS u
ORDER BY u.date DESC, u.id DESC
LIMIT 1000 OFFSET 900000;

I'm intentionally skipping 900,000 rows to exaggerate the performance impact. After that, we fetch the next 1,000 rows.

Here's the query plan for offset pagination:

EXPLAIN ANALYZE SELECT u.id, u.date, u.note, u.user_id
FROM user_notes AS u
ORDER BY u.date DESC, u.id DESC
LIMIT 1000 OFFSET 900000;

---
Limit  (cost=165541.59..165541.71 rows=1 width=52) (actual time=695.026..701.406 rows=1000 loops=1)
  ->  Gather Merge  (cost=68312.50..165541.59 rows=833334 width=52) (actual time=342.475..684.567 rows=901000 loops=1)
        Workers Planned: 2
        Workers Launched: 2
        ->  Sort  (cost=67312.48..68354.15 rows=416667 width=52) (actual time=327.846..450.295 rows=300841 loops=3)
              Sort Key: date DESC, id DESC
              Sort Method: external merge  Disk: 20440kB
              Worker 0:  Sort Method: external merge  Disk: 18832kB
              Worker 1:  Sort Method: external merge  Disk: 18912kB
              ->  Parallel Seq Scan on user_notes u  (cost=0.00..14174.67 rows=416667 width=52) (actual time=1.035..22.876 rows=333333 loops=3)
Planning Time: 0.050 ms
JIT:
  Functions: 8
  Options: Inlining false, Optimization false, Expressions true, Deforming true
  Timing: Generation 0.243 ms (Deform 0.111 ms), Inlining 0.000 ms, Optimization 0.270 ms, Emission 4.085 ms, Total 4.598 ms
Execution Time: 704.217 ms

The total execution time is 704.217 ms for offset pagination.

Here's the query returning the same set of rows using cursor pagination. I had to hardcode the @date and @lastId values for this comparison:

SELECT u.id, u.date, u.note, u.user_id
FROM user_notes AS u
WHERE u.date < @date OR (u.date = @date AND u.id <= @lastId)
ORDER BY u.date DESC, u.id DESC
LIMIT 1000;

Finally, here's the query plan for cursor pagination:

EXPLAIN ANALYZE SELECT u.id, u.date, u.note, u.user_id
FROM user_notes AS u
WHERE u.date < @date OR (u.date = @date AND u.id <= @lastId)
ORDER BY u.date DESC, u.id DESC
LIMIT 1000;

---
Limit  (cost=20605.63..20722.31 rows=1000 width=52) (actual time=37.993..40.958 rows=1000 loops=1)
  ->  Gather Merge  (cost=20605.63..30419.62 rows=84114 width=52) (actual time=37.992..40.921 rows=1000 loops=1)
        Workers Planned: 2
        Workers Launched: 2
        ->  Sort  (cost=19605.61..19710.75 rows=42057 width=52) (actual time=24.611..24.630 rows=811 loops=3)
              Sort Key: date DESC, id DESC
              Sort Method: top-N heapsort  Memory: 240kB
              Worker 0:  Sort Method: top-N heapsort  Memory: 239kB
              Worker 1:  Sort Method: top-N heapsort  Memory: 238kB
              ->  Parallel Seq Scan on user_notes u  (cost=0.00..17299.67 rows=42057 width=52) (actual time=0.009..21.462 rows=33333 loops=3)
                    Filter: ((date < @date::date) OR ((date = @date::date) AND (id <= @lastId::uuid)))
                    Rows Removed by Filter: 300000
Planning Time: 0.063 ms
Execution Time: 40.993 ms

The total execution time for cursor pagination is 40.993 ms.

A whopping 17x performance improvement with cursor pagination compared to offset pagination!

The performance with cursor pagination is consistent regardless of the page depth. This is because we're directly seeking the rows based on the cursor, which is more efficient than offset pagination. It's a huge advantage over offset pagination, especially for large datasets.

Adding Indexes for Cursor Pagination

I also tested the impact of indexes on cursor pagination. I created a composite index on the Date and Id fields to speed up the queries. Or so I thought...

Here's the SQL command to create the composite index:

CREATE INDEX idx_user_notes_date_id ON user_notes (date DESC, id DESC);

The index is created in descending order to match the sort order in our queries.

Let's see the query plan for cursor pagination with the composite index:

EXPLAIN ANALYZE SELECT u.id, u.date, u.note, u.user_id
FROM user_notes AS u
WHERE u.date < @date OR (u.date = @date AND u.id <= @lastId)
ORDER BY u.date DESC, u.id DESC
LIMIT 1000;

---
Limit  (cost=0.42..816.55 rows=1000 width=52) (actual time=298.534..298.924 rows=1000 loops=1)
  ->  Index Scan using idx_user_notes_date_id on user_notes u  (cost=0.42..82376.42 rows=100936 width=52) (actual time=298.532..298.888 rows=1000 loops=1)
        Filter: ((date < @date::date) OR ((date = @date::date) AND (id <= @lastId::uuid)))
        Rows Removed by Filter: 900000
Planning Time: 0.068 ms
Execution Time: 298.955 ms

We have an Index Scan using the composite index. However, the execution time is 298.955 ms, which is slower than the previous query without the index.

This might be because the dataset is too small to benefit from the index. I have only 1,000,000 records in the table, which might not be enough to see the performance improvement with the index.

But wait, there's more to it!

What if we were to use a tuple comparison in SQL?

EXPLAIN ANALYZE SELECT u.id, u.date, u.note, u.user_id
FROM user_notes AS u
WHERE (u.date, u.id) <= (@date, @lastId)
ORDER BY u.date DESC, u.id DESC
LIMIT 1000;

---
Limit  (cost=0.42..432.81 rows=1000 width=52) (actual time=0.020..0.641 rows=1000 loops=1)
  ->  Index Scan using idx_user_notes_date_id on user_notes u  (cost=0.42..43817.85 rows=101339 width=52) (actual time=0.019..0.606 rows=1000 loops=1)
        Index Cond: (ROW(date, id) <= ROW(@date::date, @lastId::uuid))
Planning Time: 0.060 ms
Execution Time: 0.668 ms

Finally, the index is working. The execution time is 0.668 ms, which is significantly faster than the previous queries.

With the OR-based predicate, the query planner can't turn the composite index into a direct index seek, so it still walks the index and filters rows one by one. However, the index is used effectively when the condition is expressed as a tuple comparison.

How do you translate this to EF Core?

The Postgres provider has EF.Functions.LessThanOrEqual, which accepts a ValueTuple as an argument. We can use it to produce a (u.date, u.id) <= (@date, @lastId) comparison in the query. And this will utilize the composite index.

query = query.Where(x => EF.Functions.LessThanOrEqual(
    ValueTuple.Create(x.Date, x.Id),
    ValueTuple.Create(date, lastId)));
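
In context, this is how the cursor filter in the /cursor endpoint above could be rewritten (a sketch; the .Value accesses are needed because date and lastId are nullable parameters):

if (date != null && lastId != null)
{
    // Tuple (row value) comparison instead of the OR-based predicate,
    // so PostgreSQL can seek directly on the (date DESC, id DESC) index
    query = query.Where(x => EF.Functions.LessThanOrEqual(
        ValueTuple.Create(x.Date, x.Id),
        ValueTuple.Create(date.Value, lastId.Value)));
}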

Encoding the Cursor

Here's a small utility class for encoding and decoding the cursor. We'll use this to encode the cursor in the URL and decode it when fetching the next set of results.

The clients will receive the cursor as a Base64-encoded string. They don't need to know the internal structure of the cursor.

using System.Text;
using System.Text.Json;
using Microsoft.AspNetCore.Authentication; // For Base64UrlTextEncoder

public sealed record Cursor(DateOnly Date, Guid LastId)
{
    public static string Encode(DateOnly date, Guid lastId)
    {
        var cursor = new Cursor(date, lastId);
        string json = JsonSerializer.Serialize(cursor);

        return Base64UrlTextEncoder.Encode(Encoding.UTF8.GetBytes(json));
    }

    public static Cursor? Decode(string? cursor)
    {
        if (string.IsNullOrWhiteSpace(cursor))
        {
            return null;
        }

        try
        {
            string json = Encoding.UTF8.GetString(Base64UrlTextEncoder.Decode(cursor));

            return JsonSerializer.Deserialize<Cursor>(json);
        }
        catch
        {
            return null;
        }
    }
}

Here's an example of encoding and decoding the cursor:

string encodedCursor = Cursor.Encode(
    new DateOnly(2025, 2, 15),
    Guid.Parse("019500f9-8b41-74cf-ab12-25a48d4d4ab4"));

// Result:
// eyJEYXRlIjoiMjAyNS0wMi0xNSIsIkxhc3RJZCI6IjAxOTUwMGY5LThiNDEtNzRjZi1hYjEyLTI1YTQ4ZDRkNGFiNCJ9

Cursor decodedCursor = Cursor.Decode(encodedCursor);

// Result:
// {
//   "Date": "2025-02-15",
//   "LastId": "019500f9-8b41-74cf-ab12-25a48d4d4ab4"
// }
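
Putting the pieces together, the endpoint could accept a single opaque cursor parameter instead of separate date and lastId values. Here's a sketch that combines the Cursor helper with the tuple comparison (the /cursor-encoded route, parameter names, and response shape are assumptions for illustration, not code from the original):

app.MapGet("/cursor-encoded", async (
    AppDbContext dbContext,
    string? cursor = null,
    int limit = 10,
    CancellationToken cancellationToken = default) =>
{
    if (limit < 1) return Results.BadRequest("Limit must be greater than 0");
    if (limit > 100) return Results.BadRequest("Limit must be less than or equal to 100");

    // Decode the opaque cursor supplied by the client (null on the first page)
    Cursor? decoded = Cursor.Decode(cursor);

    var query = dbContext.UserNotes.AsQueryable();

    if (decoded is not null)
    {
        query = query.Where(x => EF.Functions.LessThanOrEqual(
            ValueTuple.Create(x.Date, x.Id),
            ValueTuple.Create(decoded.Date, decoded.LastId)));
    }

    // Fetch one extra item to detect whether another page exists
    var items = await query
        .OrderByDescending(x => x.Date)
        .ThenByDescending(x => x.Id)
        .Take(limit + 1)
        .ToListAsync(cancellationToken);

    bool hasMore = items.Count > limit;
    string? nextCursor = hasMore
        ? Cursor.Encode(items[^1].Date, items[^1].Id)
        : null;

    if (hasMore)
    {
        items.RemoveAt(items.Count - 1);
    }

    return Results.Ok(new
    {
        Items = items,
        NextCursor = nextCursor,
        HasMore = hasMore
    });
});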

Summary

While offset pagination is simpler to implement, it suffers from significant performance degradation at scale. My tests showed a 17x slowdown compared to cursor pagination when accessing deeper pages.

Cursor pagination maintains consistent performance regardless of page depth and works particularly well for real-time feeds and infinite scroll interfaces.

However, cursor pagination comes with tradeoffs. It requires careful implementation, especially around cursor encoding and handling sort orders. It also doesn't provide total page counts, making it unsuitable for interfaces that need to support paged navigation.

The choice between these approaches ultimately depends on your use case:

  • Choose cursor pagination for performance-critical APIs, real-time feeds, infinite scroll, or any scenario where users frequently access deep pages
  • Stick with offset pagination for admin interfaces, small datasets, or when you need upfront page counts

Another thing to consider: which page will your users typically land on? If most users start at the first page and rarely visit other pages, offset pagination might be sufficient. This will be the case for many applications.

Remember to use tuple comparisons and appropriate indexes to get the best performance from cursor pagination.

That's all for today.

7 Common Mistakes in .NET You Can Avoid

  There are many common mistakes made during .NET (ASP.NET, .NET Core) development, which affect performance, security, and code… Code Crack...