As a seasoned backend developer with a strong background in software engineering, I leverage my expertise in programming languages, frameworks, and databases to design and implement scalable, efficient solutions. With a distinct passion for building robust and secure systems, I have cultivated a portfolio of 123 public repositories that demonstrate my commitment to open-source development. As a key member of the Eduport team, I strive to create innovative technologies that make a lasting impact. When not coding, you can find me exploring the intersection of technology and education, or simply experimenting with new tools to push the boundaries of what's possible.
Perinthalmanna, Malappuram, Kerala, India
Eduport
Kozhikode, Kerala, India
2024 May - Present
Lamsta
Ernakulam, Kerala, India
2023 Feb - Present
Eduport
Kozhikode, Kerala, India
2024 Mar - 2024 Jun
Make-a-Ton
Kochi, Kerala, India
2022 Aug - 2024 Mar
Trebuchet Systems
Kochi, Kerala, India
2021 May - 2023 Dec
Cochin University of Science and Technology
Computer Science
2020 - 2024
Government Higher Secondary School, Cherpulassery, Palakkad
2018 - 2020
The Lazy Developer's Guide to Automation: How I Made GitHub Work for Me

Picture this: it's another busy day at work, and I'm juggling multiple hotfixes for Eduport. Each fix requires creating a pull request, following the proper formatting, linking the correct ticket ID, and ensuring it goes to the right branch. It's not rocket science, but it's repetitive, time-consuming, and frankly, boring. As a developer who believes in the DRY (Don't Repeat Yourself) principle, this manual PR creation process felt like a personal affront to my lazy (I mean, efficient) nature.

The Breaking Point

After the fifth PR of the day, I had had enough. My inner voice screamed, "There has to be a better way!" That's when it hit me: if I was going to be lazy, I needed to be smart about it. The best developers aren't the ones who enjoy repetitive tasks; they're the ones who automate them away.

The Solution: GitHub Actions to the Rescue

I decided to create a GitHub Action that would handle the entire PR creation process automatically. The concept was simple: embed all the necessary information in the commit message, and let the automation handle the rest. Want to create a PR? Just include "pr to" and a ticket ID in your commit message, and boom, the robot takes care of everything else.

Here's what my lazy (but brilliant) solution does:

- Creates PRs automatically based on commit messages
- Extracts ticket IDs and links them properly
- Handles branch targeting with a fallback mechanism
- Applies a standardized PR template
- Manages protected branch rules

The Magic Format

The beauty lies in its simplicity. Instead of navigating through GitHub's UI, all I need to do is:

```
git commit -m "feat: add awesome feature pr to main with 12345"
```

That's it. No clicking through web interfaces, no copy-pasting ticket numbers, no filling out PR templates. The action takes care of everything, creating a perfectly formatted PR with all the necessary components.

Why This Makes Me a Better Developer

Some might say this is just lazy.
I say it's strategic laziness. By automating this process, I've:

- Eliminated human error in PR creation
- Standardized our team's PR format
- Saved countless hours of manual work
- Freed up mental space for actual problem-solving

And there is still an option to edit the PR afterwards, so the automated version is not set in stone.

The Ironic Truth

Here's the thing about lazy developers: we often work harder initially so we can work less later. The time I spent creating this GitHub Action was probably more than what I'd spend creating PRs manually for a month. But that's not the point. The point is that every automated task is a small victory against tedium, a step toward a more efficient workflow.

Conclusion

They say lazy people find the easiest way to do things. I prefer to think of it as finding the smartest way. In software development, automation isn't just about being lazy; it's about being efficient, consistent, and focusing on what truly matters: solving problems and creating value.

So the next time someone calls you lazy for automating your workflow, remember: you're not lazy, you're just living in 2025 while they're stuck in the manual labor of 2020.

P.S. If you're interested in implementing this yourself, check out my GitHub Action configuration. Because sharing automation is caring... and also because I'm too lazy to keep explaining how it works to everyone who asks.

Originally published by a proudly lazy developer who now has more time to write blog posts about being lazy.

GitHub Action: Here

If you enjoyed this post, connect with me on LinkedIn and follow me on GitHub for more fun project stories, creative experiments, and ideas that go beyond typical development. Let's connect, share, and keep building cool things together!

LinkedIn: sunithvs
GitHub: sunithvs
Website: sunithvs

Follow along for more unique project stories and engineering adventures!
So today during our sprint meeting, something pretty relatable happened. We needed to run some quick polls, but got annoyed with all the ads and complex UIs in the existing tools. You know how it goes: you just want to create a simple poll, but end up clicking through five screens and closing popup ads.

Live link: polls.radr.in

The Lunch Break Challenge

I figured: hey, why not build something super simple? I had about an hour during lunch, so I decided to give it a shot. Having just learned about Supabase's real-time features, I thought this was the perfect opportunity to put that knowledge to use.

The goal was straightforward:

- Create polls quickly
- See results in real time
- No ads, no fluff

The Implementation Journey

Step 1: Planning with AI

First, I turned to Claude to help me structure the project. I asked for a detailed prompt to generate this application, and it provided an extensive specification covering everything from database schema to user flows. Here's what the conversation looked like.

Me:

"I need to create a poll app using Next.js and Supabase, where a user can create polls and share a unique URL with other users. When another user opens the link, they get the poll question and can submit the poll with their name (name mandatory), then go to a results page with live updates (use Supabase realtime). The user who created the poll can edit the poll and clear the results; after editing the poll, also clear the results. Create a detailed prompt to generate this using Bolt or v0. Don't write the code, just a detailed prompt."

Claude came back with a comprehensive prompt:

"Create a real-time polling application using Next.js and Supabase with the following specifications:

Database Schema:

1. Polls table
   - id (uuid, primary key)
   - creator_id (reference to users)
   - question (text)
   - created_at (timestamp)
   - last_edited_at (timestamp)
   - is_active (boolean)
   - url_slug (unique string for sharing)

2. Options table
   - id (uuid, primary key)
   - poll_id (reference to polls)
   - option_text (text)

3. Responses table
   - id (uuid, primary key)
   - poll_id (reference to polls)
   - option_id (reference to options)
   - respondent_name (text, not null)
   - submitted_at (timestamp)

Features and Pages:

1. Poll Creation Page ('/create')
   - Form to input poll question
   - Dynamic addition of poll options (minimum 2)
   - Auto-generation of unique URL slug
   - Success message with shareable link
   - Error handling for form submission

2. Poll Voting Page ('/{poll-slug}')
   - Display poll question and options
   - Required name input field
   - Submit button for vote
   - Validation for duplicate votes
   - Redirect to results page after voting
   - Error handling for invalid poll URLs

3. Poll Results Page ('/{poll-slug}/results')
   - Real-time results using Supabase subscriptions
   - Bar chart or visual representation of votes
   - List of respondents with timestamps
   - Percentage breakdown of each option
   - Total vote count

4. Poll Management Page ('/manage/{poll-slug}')
   - Edit poll question and options
   - Clear all results functionality
   - Confirmation modals for destructive actions
   - Auto-update of last_edited_at
   - Clear results after editing

Technical Requirements:

1. Authentication
   - Simple authentication for poll creators
   - Public access for poll voters

2. Real-time Features
   - Supabase real-time subscriptions for live results
   - Optimistic updates for better UX

3. Data Validation
   - Server-side validation for all inputs
   - Prevention of duplicate votes
   - Required name field validation

4. URL Management
   - Unique, readable URL slugs
   - Proper error handling for invalid URLs

5. State Management
   - Client-side state for form handling
   - Server state for real-time updates

6. Error Handling
   - Graceful error handling for all API calls
   - User-friendly error messages
   - Loading states for all async operations

7. Responsive Design
   - Mobile-first approach
   - Accessible UI components
   - Clear visual hierarchy

Additional Features:

1. Copy-to-clipboard functionality for share URLs
2. Loading skeletons for better UX
3. Toast notifications for actions
4. Basic analytics (vote counts, submission times)
5. Rate limiting for submissions

The application should follow these flows:

1. Poll Creation Flow: Creator fills form → Validates input → Generates URL → Shows success with share link
2. Voting Flow: User opens shared link → Enters name → Submits vote → Redirects to results
3. Results View Flow: Real-time updates of votes → Visual representation → List of respondents
4. Edit Flow: Creator accesses management page → Makes changes → Confirms → Clears existing votes

Ensure all database operations are optimized and implement proper error boundaries for production use."

Step 2: Database Setup

I took this prompt and asked Claude to generate the actual database SQL. Within seconds, it provided the complete schema:

```sql
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- Create polls table
CREATE TABLE polls (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    question TEXT NOT NULL CHECK (char_length(question) > 0),
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    last_edited_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    is_active BOOLEAN DEFAULT true,
    url_slug TEXT UNIQUE NOT NULL CHECK (char_length(url_slug) > 0),
    CONSTRAINT valid_question_length CHECK (char_length(question) <= 500)
);

-- Create poll options table
CREATE TABLE options (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    poll_id UUID NOT NULL REFERENCES polls(id) ON DELETE CASCADE,
    option_text TEXT NOT NULL CHECK (char_length(option_text) > 0),
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    CONSTRAINT valid_option_length CHECK (char_length(option_text) <= 200)
);

-- Create responses table
CREATE TABLE responses (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    poll_id UUID NOT NULL REFERENCES polls(id) ON DELETE CASCADE,
    option_id UUID NOT NULL REFERENCES options(id) ON DELETE CASCADE,
    respondent_name TEXT NOT NULL CHECK (char_length(respondent_name) > 0),
    submitted_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    CONSTRAINT valid_respondent_name_length CHECK (char_length(respondent_name) <= 100)
);

-- Create indexes for better query performance
CREATE INDEX idx_polls_url_slug ON polls(url_slug);
CREATE INDEX idx_options_poll_id ON options(poll_id);
CREATE INDEX idx_responses_poll_id ON responses(poll_id);
CREATE INDEX idx_responses_option_id ON responses(option_id);
CREATE INDEX idx_responses_submitted_at ON responses(submitted_at);
```

Step 3: Project Setup

With the database schema ready, it was time to create the project:

```
npx create-next-app -e with-supabase poll
```

Step 4: The Magic Happens

The real fun began when I fired up WindSurf and started building. The AI prompts gave me a solid foundation, but now it was time to bring it to life. Here's how the conversation went.

First, I started with the prompt from Claude and asked for a basic polling app. It gave me a functional app right out of the gate! The best part? The real-time features were already included; I just needed to enable them in Supabase. The UI wasn't that great at first, though.

Just one more line of prompting, and boom! The create page went from plain to pretty, while keeping all the features intact. With this, I got a great UI for the create, poll, and results pages. I hosted the first version on Vercel and connected polls.radr.in.

I needed a landing page for it, so I continued prompting, and I got a stunning animation which you can see on the website, polls.radr.in.

The Fun Part

The best thing? I made it just in time before the meeting resumed, and we actually used it for the rest of our polls! Sometimes skipping a meal is worth it when you're in the flow. Those hours spent learning Supabase really paid off: from experimentation to actual use in just a week.

The code's on GitHub if anyone wants to check it out.
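One small piece worth showing: the unique url_slug from the schema above could be generated with something like this. This is a hypothetical sketch, not necessarily how the app actually does it:

```python
import secrets
import string

# Hypothetical slug generator for the url_slug column; uniqueness is
# ultimately enforced by the UNIQUE constraint in the schema above.
ALPHABET = string.ascii_lowercase + string.digits

def generate_slug(length: int = 8) -> str:
    """Return a random, URL-safe slug built from lowercase letters and digits."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_slug())
```

On a collision (the database rejecting a duplicate slug), the app would simply retry with a fresh value.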
Nothing fancy, just a simple solution to an annoying problem!

GitHub repo: https://github.com/sunithvs/poll-flow

If you enjoyed this post, connect with me on LinkedIn and follow me on GitHub for more fun project stories, creative experiments, and ideas that go beyond typical development. Let's connect, share, and keep building cool things together!

LinkedIn: sunithvs
GitHub: sunithvs
Website: sunithvs

Follow along for more unique project stories and engineering adventures!
Stop context switching between branches and boost your development workflow with Git's hidden powerhouse feature.

The Developer's Git Challenge

If you're working on multiple Git branches simultaneously, you've probably experienced this: feature development interrupted by urgent production bugs, constant branch switching, and the dreaded git stash dance. Sound familiar? There's a powerful Git feature that could transform your workflow.

At Eduport, I managed different tasks by making multiple copies of the code and switching between them. While this let me work on new features and quick fixes at the same time, it made it tricky to keep everything in sync, especially when dealing with database migrations and changes in environment variables. Then I found git worktree.

Advanced Git Techniques: Introducing Git Worktree

While most developers rely on basic Git commands, experienced users leverage git worktree to maintain multiple working directories connected to the same repository. This technique eliminates the overhead of branch switching and context management.

Why You Should Use Git Worktree

- Handle multiple branches simultaneously
- Zero context-switching overhead
- Maintain separate development environments
- No more git stash hassles
- Clean separation of concerns
- Improved code organization

Worktree Implementation

Here is the list of commands used to add worktrees:

```
git clone <repository-url>
cd <repository-name>

# Create parallel working directories
git worktree add ../path branch-name
git worktree add ../hotfix urgent-fix
git worktree add ../feature-1 new-development
git worktree add ../debug debug/production-i
```

Handling Production Emergencies

Traditional Git workflow:

```
git stash save "feature work"
git checkout production
git checkout -b hotfix/bug
# Fix bug
git checkout main
git stash pop
```

Worktree workflow:

```
git worktree add ../hotfix hotfix/bug
cd ../hotfix
```

It's super easy, right?

Advanced Git Techniques: Quick Reference

Essential worktree commands for improved productivity:

```
# List all worktrees
git worktree list

# Add a new worktree
git worktree add ../path branch-name

# Remove a worktree
git worktree remove ../path

# Clean up stale worktrees
git worktree prune
```

Remember to keep it simple:

- Use clear directory names
- Group worktrees in one parent folder
- Clean up when done

Advanced Git workflows require practice. Start with simple scenarios and gradually incorporate more complex patterns as your team adapts. For more advanced usage and the full documentation, refer here.

The transition from traditional branch switching to Git worktree management is an investment in your development workflow that pays dividends in productivity and organization.

If you find this post useful, connect with me on LinkedIn and follow me on GitHub for more fun project stories, creative experiments, and ideas that go beyond typical development. Let's connect, share, and keep building cool things together!

LinkedIn: sunithvs
GitHub: sunithvs
Website: sunith

Follow along for more unique project stories, productivity tools, and engineering adventures!
Ever heard of a project that started with a simple quest to find a Valentine? Let me tell you about Minglikko, one of my best memories from CUSAT: a wild ride of creativity and engineering that began in February 2022!

The Origin Story

Picture this: Sahil, Rohit, Varsha, Nihal, and Sabeeh are brainstorming how to find a Valentine for Varsha. But being engineers, we couldn't just settle for a typical matchmaking approach. We thought, "What if we create something unique?"

Our initial idea was simple: a Google Form, filled in by users who want to find a Valentine, that matches people based on interests. But we wanted more. That's where our design wizard Amrutha Chechi entered and transformed our basic concept into an amazing website design.

The Challenge

Google Forms was too limited, and Airtable had the features, but its free plan has some limitations. So we decided to build a full-fledged platform where users could:

- Create a login
- Answer interesting questions
- Remain completely anonymous
- Mark their priorities
- Be matched by a matchmaking algorithm
- Chat with their matched Valentine

Amrutha Dinesh's UI was so fantastic it put us under pressure to release quickly (and this was before ChatGPT existed, imagine that!). Without that design, we wouldn't have envisioned such a comprehensive platform or achieved that level of outreach. Kudos to Amrutha Chechi!

The questions and texts shown were placeholders; the actual website had some changes.

Launch and Buzz

We dropped a "Coming Soon" poster with the name "Minglikko", and boom! Curiosity exploded in and around CUSAT. Random friends and strangers were sliding into our DMs, asking, "What is this?" Within just hours of launching, we saw incredible traction: from an initial 100 registrations in the first hour to over 500 registrations by midnight. Turns out, everyone was desperate to find a Valentine!

On the night of February 13th, we faced a critical challenge: our matching algorithm wasn't ready. Despite the website launch, we continued working intensively through the night to develop a robust matching system.

The matching algorithm was a team effort, with Sahil Athrij playing a crucial role in developing the core logic. Together with Rohit, Varsha, Shaheen, Nihal Muhemmed, and Sabeeh, we crafted it. Our dedication paid off when we successfully presented a research paper about this algorithm at The Gandhigram Rural Institute, Dindigul District, with Sasi Gopalan Sir as our mentor.

Questions and the Gender "Feature"

When we released the website, some of my friends complained that there was no section for entering gender (we did this intentionally, not by accident!). As feature requests kept piling up, we (well, actually Rohit) decided to add a gender selection box on the homepage. He included an extensive list of 140 genders just for fun, but it was purely cosmetic. The matching algorithm didn't consider gender at all, and we didn't even store the selected data.

The questions we asked were designed to be a little quirky and fun, bringing out each person's personality. Here's the list:

- Rate your brains. (0: Brain Potato, 5: Omniscient)
- Show me your biceps. (0: Pappadam, 5: Hercules)
- Beauty undo? (0: Mirrors scare me, 5: Cleopatra)
- How charismatic are you? (0: Bed is my valentine, 5: I sell sand in the Sahara)
- How much money do you burn? (0: Starving to death, 5: I pave golden roads)
- Generosity, yes, rate it. (0: I burn orphanages, 5: Karl Marx)
- You'd die for God? (0: "I am become Death." - J. Robert Oppenheimer, 5: "I am become Death." - Krishna)
- Your connection with liberalism (0: Girls? No school!!, 5: Martin Luther King)

Each question could receive up to 5 points, but users could only distribute a total of 20 points across all questions. This made them think carefully about which traits to prioritise, guessing what would matter most to their perfect match: a fun little game of planning to find their Valentine!

The Fun Finale

We finally released the matches, and our Valentine's mission was a success.
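The 20-point budget described above can be sketched in a few lines. This is a hypothetical illustration of the validation and scoring idea, not the actual algorithm from our paper:

```python
# Hypothetical sketch of the point-budget rules; the real Minglikko
# matching algorithm is considerably more involved than this.
QUESTIONS = 8        # the eight quirky questions
MAX_PER_QUESTION = 5
BUDGET = 20          # total points a user may distribute

def valid_answers(points):
    """Each answer is 0-5 and the total across all questions stays within 20."""
    return (
        len(points) == QUESTIONS
        and all(0 <= p <= MAX_PER_QUESTION for p in points)
        and sum(points) <= BUDGET
    )

def compatibility(priorities, ratings):
    """Weight one user's self-ratings by what the other user prioritised."""
    return sum(w * r for w, r in zip(priorities, ratings))

alice = [5, 0, 5, 0, 5, 0, 5, 0]  # cares most about questions 1, 3, 5, 7
bob = [4, 1, 3, 2, 5, 0, 5, 0]    # Bob's self-ratings
print(valid_answers(alice), valid_answers(bob), compatibility(alice, bob))
```

The budget is what makes the game interesting: maxing out every question is impossible, so users have to reveal what they actually care about.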
We created a platform that helped Varsha find her Valentine and provided opportunities for others. What started as a friend's matchmaking quest turned into a memorable campus memory.

Technical Journey

As a Django pro, backend development was my playground. The server-side logic and database management flowed smoothly, with API integration happening at lightning speed. However, the frontend was a different story: a real challenge that had us scratching our heads. Stepping up to the plate, SANU MUHAMMED brought his UI expertise and completely transformed our basic interface.

- Chat System: Implemented end-to-end encryption using the Signal Protocol, ensuring user privacy and secure communication
- Backend Infrastructure: Used Django for robust and fast server-side development
- Matching Algorithm: Developed a custom algorithm to connect compatible users based on their interests and preferences
- Anonymous Identities: Created unique code names like "Shikkari Kuyil" to protect user anonymity
- Real-time Communication: Utilized Django Channels for seamless, instant messaging
- AWS: Used AWS EC2 for hosting the entire platform
- Collaborative Development: Team effort involving Sahil, Rohit, Varsha, Shaheen, Nihal, Sanu, and Sabeeh

If you enjoyed this post, connect with me on LinkedIn and follow me on GitHub for more fun project stories, creative experiments, and ideas that go beyond typical development. Let's connect, share, and keep building cool things together!

LinkedIn: sunithvs
GitHub: sunithvs
Website: sunithvs

Follow along for more unique project stories and engineering adventures!
As a Django developer, you may have encountered a common performance issue called the N+1 query problem. It can severely impact the speed and efficiency of your application, especially as your codebase and data grow. In this blog post, we'll dive into what the N+1 query problem is, why it's a problem, and how you can easily solve it using Django's powerful tools.

What is the N+1 Query Problem?

Imagine you have a Django application with three models: Company, Employee, and Project. You want to display a list of all companies, along with the names of their employees and the projects those employees are working on.

```python
class Company(models.Model):
    name = models.CharField(max_length=100)

class Employee(models.Model):
    name = models.CharField(max_length=100)
    company = models.ForeignKey(Company, on_delete=models.CASCADE, related_name='employees')

class Project(models.Model):
    name = models.CharField(max_length=100)
    employees = models.ManyToManyField(Employee, related_name='projects')
```

Without any optimizations, your view might look something like this:

```python
class CompanyListNoOptimisationView(View):
    def get(self, request):
        companies = Company.objects.all()  # 1 query for all companies
        data = []
        for company in companies:
            company_data = {
                'name': company.name,
                'employees': [
                    {
                        'name': employee.name,
                        # 1 query per employee for projects
                        'projects': [project.name for project in employee.projects.all()]
                    }
                    # 1 query per company for employees
                    for employee in company.employees.all()
                ]
            }
            data.append(company_data)
        return JsonResponse(data, safe=False)
```

In this scenario, the initial query fetches all the companies. But then, for each company, an additional query is made to fetch the employees, and for each employee, another query is made to fetch the projects.
This results in N+1 queries (and more, once the per-employee queries are counted), where N is the number of companies.

Query Count Breakdown

Let's consider an example scenario with the following data:

- 2 companies
- 3 employees per company
- 2 projects per employee

In this case, the total number of queries generated would be:

- 1 query to fetch all companies
- 1 query per company to fetch employees: 2 companies = 2 queries
- 1 query per employee to fetch projects: 2 companies × 3 employees = 6 queries

Total queries: 1 (companies) + 2 (employees) + 6 (projects) = 9 queries.

```
Total time: 0.03s
Number of queries: 9
Query 1: SELECT "base_company"."id", "base_company"."name" FROM "base_company"
Query 2: SELECT "base_employee"."id", "base_employee"."name", "base_employee"."company_id" FROM "base_employee" WHERE "base_employee"."company_id" = 1
Query 3: SELECT "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" = 1
Query 4: SELECT "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" = 2
Query 5: SELECT "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" = 4
Query 6: SELECT "base_employee"."id", "base_employee"."name", "base_employee"."company_id" FROM "base_employee" WHERE "base_employee"."company_id" = 2
Query 7: SELECT "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" = 3
Query 8: SELECT "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" = 5
Query 9: SELECT "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" = 6
[11/Nov/2024 19:33:30] "GET /company-list/ HTTP/1.1" 200 379
```

These logs are generated by the QueryCounterMiddleware; more about that at the end of the post.

The problem with this approach is that as the number of companies, employees, and projects grows, the number of queries increases dramatically, leading to slow response times and high database load.

Solving the N+1 Problem with select_related and prefetch_related

- select_related: used for foreign key or one-to-one relationships. It performs a SQL join and retrieves the related object in a single query.
- prefetch_related: used for many-to-many and reverse foreign key relationships. It performs a second query and maps the results in Python.

These tools allow us to reduce query counts by loading all related objects in bulk.
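To make the batching idea concrete: conceptually, prefetch_related replaces the per-parent queries with one `IN (...)` query whose rows are then grouped back onto their parents in Python. A plain-Python sketch of that grouping step (illustrative only, not Django's actual internals):

```python
from collections import defaultdict

# Pretend rows from the employee table; in Django these would come from
# a single batched query: SELECT ... WHERE company_id IN (1, 2)
EMPLOYEE_ROWS = [
    {"id": 1, "name": "A", "company_id": 1},
    {"id": 2, "name": "B", "company_id": 1},
    {"id": 3, "name": "C", "company_id": 2},
]

def group_by_company(rows):
    """Map each company_id to its employees, the way prefetch_related
    stitches child rows back onto their parents in Python."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[row["company_id"]].append(row["name"])
    return dict(grouped)

print(group_by_company(EMPLOYEE_ROWS))  # {1: ['A', 'B'], 2: ['C']}
```

One batched query plus an in-memory grouping pass replaces N separate round trips to the database.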
Here's the same view, but now using select_related and prefetch_related:

```python
class CompanyListOptimisedView(View):
    def get(self, request):
        companies = Company.objects.prefetch_related(
            Prefetch('employees', queryset=Employee.objects.select_related('company').prefetch_related('projects'))
        )
        data = []
        for company in companies:
            company_data = {
                'name': company.name,
                'employees': [
                    {
                        'name': employee.name,
                        'projects': [project.name for project in employee.projects.all()]
                    }
                    for employee in company.employees.all()
                ]
            }
            data.append(company_data)
        return JsonResponse(data, safe=False)
```

Query Count Breakdown (Optimised)

Now, with the optimised code:

- 1 query to fetch all companies
- 1 query to fetch all employees with their related company data (using select_related)
- 1 query to fetch all projects for all employees (using prefetch_related)

Total queries: 3.

```
Total time: 0.02s
Number of queries: 3
Query 1: SELECT "base_company"."id", "base_company"."name" FROM "base_company"
Query 2: SELECT "base_employee"."id", "base_employee"."name", "base_employee"."company_id", "base_company"."id", "base_company"."name" FROM "base_employee" INNER JOIN "base_company" ON ("base_employee"."company_id" = "base_company"."id") WHERE "base_employee"."company_id" IN (1, 2)
Query 3: SELECT ("base_project_employees"."employee_id") AS "_prefetch_related_val_employee_id", "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" IN (1, 2, 4, 3, 5, 6)
[11/Nov/2024 19:40:05] "GET /company-list-optimised/ HTTP/1.1" 200 379
```

By applying select_related and prefetch_related, we reduced the query count from 9 to 3, a significant performance improvement.

QueryCounterMiddleware

```python
import time

from django.db import connection
from django.conf import settings
from django.utils.deprecation import MiddlewareMixin


class QueryCounterMiddleware(MiddlewareMixin):
    def process_request(self, request):
        # Only proceed if DEBUG is True and at least one flag is enabled
        if settings.DEBUG and (settings.SHOW_RAW_QUERY or settings.SHOW_QUERY_COUNT):
            self.start_time = time.time()
            self.queries_before_request = len(connection.queries)

    def process_response(self, request, response):
        if settings.DEBUG and (settings.SHOW_RAW_QUERY or settings.SHOW_QUERY_COUNT):
            total_time = time.time() - self.start_time
            queries_after_request = len(connection.queries)
            if settings.SHOW_QUERY_COUNT:
                query_count = queries_after_request - self.queries_before_request
                # Print labels in yellow ('\033[93m') and values in green ('\033[92m')
                print(f"\033[93mTotal time:\033[0m \033[92m{total_time:.2f}s\033[0m")
                print(f"\033[93mNumber of queries:\033[0m \033[92m{query_count}\033[0m")
            if settings.SHOW_RAW_QUERY:
                # '\033[91m' starts red text; '\033[0m' resets the colour
                for index, query in enumerate(connection.queries[self.queries_before_request:], start=1):
                    sql_query = query['sql']
                    print(f"\033[92mQuery {index}:\033[0m \033[91m{sql_query}\033[0m")
        return response
```

The QueryCounterMiddleware is a custom Django middleware that provides a simple way to monitor the performance of your application. It works by intercepting the request-response cycle and capturing information about the database queries executed during the process.

Here's what the middleware does:

- Track the number of queries: when a request comes in, the middleware stores the number of queries executed so far. After the request is processed, it calculates the difference to determine the total number of queries executed during the request.
- Log the raw SQL queries: in addition to the query count, the middleware can print the raw SQL queries executed during the request. This can be extremely helpful for identifying the root cause of performance issues.
- Measure the total request time: the middleware also tracks the total time taken for the request-response cycle, providing valuable insight into the overall performance of your application.

How to Use the QueryCounterMiddleware

To use the QueryCounterMiddleware in your Django project, follow these steps:

1. Add the middleware to your project. Open your settings.py file and add the QueryCounterMiddleware to your MIDDLEWARE list:

```python
MIDDLEWARE = [
    # Other middleware classes...
    'path.to.QueryCounterMiddleware',
]
```

2. Configure the middleware's behavior by setting the following variables in your settings.py file:
   - SHOW_QUERY_COUNT: if True, the middleware will print the total number of queries executed during the request.
   - SHOW_RAW_QUERY: if True, the middleware will print the raw SQL queries executed during the request.

Optimising your Django queries with select_related and prefetch_related can significantly improve application performance, especially when working with complex relationships. The N+1 query issue, though common, is avoidable with a few best practices, leading to faster, more efficient applications and a better user experience.

If you found this post helpful, connect with me on LinkedIn and follow me on GitHub for more insights, blogs, and stories on Django, backend development, and scalable application design. Let's connect and keep learning together!

LinkedIn: sunithvs
GitHub: sunithvs

Follow for more Django, backend tips, and development stories!

"Optimising Django Queries to Overcome the N+1 Problem!" was originally published in Python in Plain English on Medium.