As a seasoned Backend Developer at Eduport with a strong background in software engineering, I bring 5+ years of experience implementing scalable, efficient solutions in Java, Python, and Spring Boot. With 145+ public repositories, I continuously push the boundaries of innovation in areas such as DevOps, Machine Learning, and cloud computing.
Perinthalmanna, Malappuram, Kerala, India

By Sunith VS (@sunithvs_ | @truevibecoder)

I built JioBase in 2 hours using Claude Code. It went viral, served 14 million requests, and survived a 200 million request DDoS attack. All of this from a solo developer with zero marketing budget.

Vibe coding made that possible. But vibe coding done wrong can also get you breached, burned out, or shipping garbage at scale. Here's everything I've learned about doing it right.

What Is Vibe Coding?

Vibe coding is building software by describing what you want in plain language and letting AI generate the code. You're not writing syntax. You're directing. The AI writes; you guide, review, and ship.

Andrej Karpathy coined the term in early 2025. Since then it's gone from a meme to a legitimate way of building products. Founders, indie hackers, designers, and even non-developers are shipping real products with it.

But here's what most people miss: vibe coding is not about ignoring code. It's about shifting your relationship with it.

The Fundamentals Still Matter

This is the part nobody wants to hear. You don't need to write every line yourself, but you do need to understand what the code is doing. If you can't read through a diff and know whether something is safe, you're flying blind.

You should know:
- How authentication works (sessions, JWTs, OAuth)
- What a database query is and how SQL injection happens
- What an API endpoint does and who can call it
- What environment variables are and why they must never be hardcoded
- Basic networking (what a proxy is, what DNS does, what CORS means)

These aren't advanced topics. They're the baseline. Without them, you're not a vibe coder. You're just a vibe clicker hoping nothing breaks.

A 2025 Veracode report found that 45% of AI-generated code introduces security vulnerabilities, and AI-assisted code shows security issues at 2.74x the rate of human-written code. The AI doesn't know why a security check exists. It just knows how to make the error go away.

The Real Risks of Vibe Coding

1. Security Debt
AI optimizes for making code run, not making it safe. It will remove validation checks, relax database policies, or skip auth flows to resolve a runtime error. If you don't review, you ship the hole.

2. Hallucinated Dependencies
LLMs will confidently reference packages that don't exist. These hallucinated package names can be registered by bad actors with malicious code. Always verify every dependency you install.

3. Hardcoded Secrets
AI will sometimes write API keys, database URLs, and credentials directly into code. Always check your code before committing. Use environment variables. Never push secrets to GitHub.

4. No Understanding of Context
The AI doesn't know your business logic. It doesn't know that this field should never be public, or that this endpoint should require admin access. You do. Tell it explicitly.

5. Moving Too Fast
The biggest vibe coding risk isn't technical. It's psychological. When you can ship in 2 hours, you will skip testing, skip reviewing, skip thinking. Slow down for the important parts.

Best Practices for Responsible Vibe Coding

Break It Into Small Tasks
Don't give AI one giant prompt and hope for the best. One feature at a time. One component at a time. Review before moving on.

Read the Diff Every Time
Before you accept any change, read what changed. Not the whole file. The diff. Know what was added and what was removed. This is non-negotiable.

Write Tests Early
Ask the AI to write tests alongside the code. Not after. Alongside. If you can't test it, you don't understand it well enough to ship it.

Review Security Explicitly
When building anything with auth, payments, or user data, ask the AI: "What are the security risks in this code?" Then verify the answers yourself.

Use Environment Variables Always
Never hardcode credentials. Create a .env file, add it to .gitignore, and check your commits before pushing.

Keep a Prompt Log
Document what you asked for and what you got.
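A lightweight way to do this is an append-only JSONL file, one entry per prompt. A minimal sketch (the helper name and field names are mine, purely illustrative, not any standard format):

```javascript
// Append-only prompt log: one JSON object per line (JSONL), so the log is
// easy to grep and easy to append to. Field names are illustrative.
function formatLogEntry(prompt, outcome, timestamp = new Date().toISOString()) {
  return JSON.stringify({ timestamp, prompt, outcome }) + '\n';
}

// Usage (Node.js):
// fs.appendFileSync('prompt-log.jsonl', formatLogEntry(
//   'add JWT auth to /api/posts', 'accepted after removing hardcoded secret'));
```

Grepping a file like this for an endpoint name later tells you exactly which prompt produced the code you are debugging.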
This becomes your debugging trail when something breaks at 2am.

Don't Let AI Touch Production Directly
Review and test in a staging environment first. Especially database migrations. Especially auth changes. One wrong query can wipe data. The AI doesn't know that your users' data matters.

Tools I Actually Use

Claude Code
My primary tool. It's not just autocomplete. It's a full coding agent that can plan, execute, debug, and iterate. I built JioBase entirely with it. The key is treating it like a junior developer, not a magic button.

Claude Cowork (Desktop)
For everything that isn't pure coding: writing, planning, drafting social posts, filling out forms, thinking through product decisions. It runs locally on your machine and feels like having a thinking partner open all the time.

Skills in Claude
Claude supports skills, which are reusable instruction sets for specific tasks. There are skills for creating docx files, pptx presentations, PDFs, spreadsheets, and more. Before creating any document or file, check if a skill exists. It will save you time and produce better output.

MCP Servers
MCP (Model Context Protocol) is how you connect Claude to external tools and data sources. This is where things get really powerful.

Figma MCP is one of the most useful ones for builders. You can point Claude at a Figma design and ask it to generate code directly from the design. No more manually translating designs to components. The AI reads the design, understands the layout, and writes the implementation.

Other useful MCPs include database connectors, GitHub, Slack, and browser automation. Every MCP server you connect expands what Claude can do without leaving the conversation.

My Personal Recommendations

Always Stay Updated with AI Tools
The AI tooling landscape moves faster than any other space in tech right now. Tools that didn't exist 6 months ago are now essential. Make it a habit to check what's new every few weeks. Try things. Most tools have free tiers.

Try Every New Tool for One Real Project
Don't just read about tools. Use them. Give each new tool one real task from a project you're actually working on. That's the only way to know if it's worth adding to your workflow.

Follow the Builders, Not Just the Influencers
The best vibe coding knowledge comes from people who are actively shipping things. Follow indie hackers, solo developers, and open source builders. They share real lessons, not polished takes.

Build in Public
Ship early. Share your process. Post your failures alongside your wins. The feedback you get from building in public is faster and more honest than anything else. I built JioBase and posted about it the same night. The community response shaped the product within 24 hours.

Your Taste Is Your Competitive Advantage
AI can write code. It cannot have taste. Your sense of what's useful, what's elegant, and what solves the real problem is the thing that makes your products different. Protect that. Develop it. Don't let speed kill it.

Know When to Stop Vibing and Start Thinking
Some problems need a whiteboard, not a prompt. Architecture decisions, data model design, security reviews. These are not vibe coding moments. Sit down, think it through, then come back to the AI.

The Responsible Vibe Coder's Checklist

Before you ship anything:
- Did you read through every file that changed?
- Are all credentials in environment variables?
- Is your .gitignore set up correctly?
- Have you tested the happy path and at least two error cases?
- If there's auth, did you verify unauthorized users can't access protected routes?
- If there's a database, did you check for injection risks?
- Did you verify every npm/pip package you installed is real?
- Do you understand what every API endpoint does and who can call it?

The Mindset Shift

Vibe coding is not about trusting AI blindly. It's about using AI as leverage while keeping your brain in the loop.

The best vibe coders I know are not the ones who prompt the hardest.
They're the ones who review the sharpest. They ship fast because they've built the habits that let them move quickly without things falling apart.

You don't have to choose between speed and quality. You just have to be intentional about both.

Now go build something.

Sunith VS is an indie hacker and vibe coder building products at the intersection of developer tools and travel tech. Creator of JioBase and DevB.io.

Links:
JioBase: https://jiobase.com
X: https://x.com/sunithvs_/
Sunith V S: https://sunithvs.com
GitHub: https://github.com/sunithvs/jiobase
Buy Me a Coffee: https://buymeacoffee.com/sunithvs

jiobase.com: fixing the Supabase block in India

Your Supabase app stopped working in India. Here's exactly what happened, why the usual fixes won't help, and how to get your app running again for all your users in under 5 minutes.

On February 24, 2026, I woke up to a dead app. My Supabase-powered project, the one I'd been building for months, was returning ERR_CONNECTION_TIMED_OUT on every single API call. Auth broken. Database unreachable. Realtime gone. Production users in India couldn't do anything.

I wasn't alone. Thousands of developers across India woke up to the same nightmare.

What Happened

India's government issued a blocking order under Section 69A of the Information Technology Act. Major ISPs like Jio, Airtel, and ACT Fibernet started DNS-blocking all subdomains under *.supabase.co.

Here's the frustrating part:
- supabase.com (the marketing site and dashboard) still works fine
- *.supabase.co (your actual API endpoint) is blocked

So you can log into the Supabase dashboard, stare at your perfectly healthy database, and watch helplessly as your production app can't reach it.

The Technical Details

The block works through DNS poisoning. When a user on Jio tries to resolve yourproject.supabase.co, instead of getting Supabase's real IP, their ISP returns a sinkhole IP like 49.44.79.236 (owned by Reliance, not AWS where Supabase actually lives). The connection hangs until it times out.

This affects every Supabase service:
- REST API (PostgREST) queries
- Authentication flows
- File uploads and downloads (Storage)
- Edge Functions
- Realtime WebSocket connections
- GraphQL

The Scale of the Problem

This isn't a minor inconvenience. India is Supabase's 4th-largest market globally, accounting for roughly 9% of global traffic. Supabase received approximately 365,000 visits from India in January 2026 alone, growing 179% year-over-year. Jio alone has 500+ million subscribers. Combined with Airtel and ACT Fibernet, the block potentially affects hundreds of millions of internet users whose apps silently break.

No prior notice was given. No public explanation. No timeline for resolution. As of today, the block is still active.

Why the Usual Fixes Don't Work

"Just change your DNS to 1.1.1.1"
This might fix your development machine, but you can't ask your users to change their DNS settings. If you have 10,000 users on Jio, you now have 10,000 users who can't use your app. That's a workaround for one person, not a solution. Some ISPs are also reportedly using deep packet inspection (DPI) alongside DNS poisoning, which means even changing DNS doesn't always work.

"Just use a VPN"
Same problem. You can VPN yourself out of the block, but your production app's end users aren't going to install Cloudflare WARP just to use your to-do list app.

"Switch to a Supabase custom domain"
Supabase does offer custom domains on paid plans, which would bypass the DNS block. But this requires a paid Supabase plan, DNS configuration, and doesn't cover all edge cases. If you're on the free tier like most Indian developers and students, this isn't an option.

The Real Fix: A Reverse Proxy

The solution is straightforward. Instead of your app talking directly to yourproject.supabase.co (which is blocked), route the traffic through a proxy on a domain that isn't blocked:

Your App -> your-proxy.example.com (not blocked) -> yourproject.supabase.co (blocked by the ISP, but reachable from the proxy)

The proxy lives on Cloudflare's edge network, and Cloudflare Workers domains aren't blocked. The proxy receives your request, forwards it to Supabase, gets the response, and sends it back. From the ISP's perspective, your app is just talking to a Cloudflare domain.

Everything works transparently: REST, Auth, Storage, Edge Functions, Realtime WebSockets.
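At its core, the proxy's job is nothing more than host substitution: keep the path and query, swap the origin. A minimal sketch of that rewrite (the helper name is mine, purely for illustration):

```javascript
// Rewrite a request URL received on the proxy domain into the equivalent
// Supabase URL, preserving the path and query string. Illustrative helper.
function rewriteToSupabase(incomingUrl, supabaseBase) {
  const incoming = new URL(incomingUrl);
  const target = new URL(supabaseBase);
  target.pathname = incoming.pathname; // e.g. /rest/v1/todos
  target.search = incoming.search;     // e.g. ?select=*
  return target.toString();
}
```

Everything else a proxy does, like forwarding headers, streaming bodies, and handling WebSocket upgrades, is wrapped around this one transformation.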
Your anon key, service role key, and all Row Level Security policies stay exactly the same.

Option 1: Use JioBase (2-minute fix)

JioBase is a free managed proxy I built specifically for this problem. It handles all the infrastructure so you don't have to.

Step 1: Sign up at app.jiobase.com
Step 2: Create an app and enter your Supabase project URL (https://yourproject.supabase.co)
Step 3: Change one line of code in your app:

// Before
const supabase = createClient(
  'https://yourproject.supabase.co',
  'your-anon-key'
);

// After
const supabase = createClient(
  'https://your-slug.jiobase.com', // Just change this URL
  'your-anon-key'                  // Key stays the same
);

Step 4: Deploy. Done.

JioBase proxies everything: REST, Auth, Storage, Edge Functions, Realtime WebSockets. Your Supabase project doesn't know the difference. Your users don't know the difference. Everything just works again.

Option 2: Self-Host a Cloudflare Worker

If you'd rather control the infrastructure yourself, here's a minimal Cloudflare Worker that does the same thing:

export default {
  async fetch(request) {
    const url = new URL(request.url);
    const SUPABASE_URL = 'https://yourproject.supabase.co';
    const target = new URL(url.pathname + url.search, SUPABASE_URL);

    // Forward the original headers, but point Host at Supabase
    const headers = new Headers(request.headers);
    headers.set('Host', new URL(SUPABASE_URL).hostname);

    // Pass WebSocket upgrade requests (Realtime) straight through
    if (request.headers.get('Upgrade') === 'websocket') {
      return fetch(target.toString(), { headers, method: request.method });
    }

    const response = await fetch(target.toString(), {
      method: request.method,
      headers,
      body: request.body,
    });

    // Relax CORS so browsers accept responses served from the proxy domain
    const responseHeaders = new Headers(response.headers);
    responseHeaders.set('Access-Control-Allow-Origin', '*');
    responseHeaders.set('Access-Control-Allow-Methods', 'GET, POST, PUT, PATCH, DELETE, OPTIONS');
    responseHeaders.set('Access-Control-Allow-Headers', request.headers.get('Access-Control-Request-Headers') || '*');

    return new Response(response.body, {
      status: response.status,
      headers: responseHeaders,
    });
  },
};

Deploy with:

npx wrangler deploy

Then update your Supabase client URL to point to your Worker's URL (e.g., https://supabase-proxy.your-account.workers.dev).

Limitations of self-hosting: you'll need to handle rate limiting, monitoring, and WebSocket edge cases yourself. Cloudflare's free tier gives you 100,000 requests per day, which is enough for most small to medium apps.

Option 3: Supabase Custom Domain (Paid Plans)

If you're on a Supabase paid plan:
- Go to Project Settings > Custom Domains
- Add your own domain (e.g., api.yourdomain.com)
- Configure the DNS records Supabase provides
- Update your client to use the custom domain

This bypasses the block because traffic goes through your domain, not *.supabase.co. However, it requires a paid Supabase plan and DNS access to your domain.

How to Test if You're Affected

Not sure if the block is hitting you? Run this in your terminal:

nslookup yourproject.supabase.co

If you see an IP like 49.44.79.236 or any Reliance-owned IP instead of an AWS IP, you're being DNS-poisoned.

You can also test from your browser's developer console:

fetch('https://yourproject.supabase.co/rest/v1/', {
  headers: { 'apikey': 'your-anon-key' }
}).then(r => console.log('Status:', r.status))
  .catch(e => console.log('Blocked:', e.message));

If it times out, the block is active on your network.

FAQ

Is this legal? Can I use a proxy to bypass the block?
Using a reverse proxy is a standard networking practice. You're not bypassing the block for end users; you're routing your application's API traffic through your own infrastructure. This is architecturally no different from putting CloudFront or any CDN in front of your backend.

Will this affect my Supabase security?
No. The proxy is a transparent pass-through. All headers, tokens, and keys are forwarded unchanged. Your Row Level Security policies, auth rules, and API permissions work exactly the same way.

Will there be added latency?
Minimal. Cloudflare has edge nodes in Mumbai, Chennai, and other Indian cities.
The proxy adds 1-5 ms of overhead, unnoticeable in practice.

What if the block gets lifted?
You can switch back to the direct Supabase URL anytime. If you're using JioBase, just change one line of code back. Having a proxy in place is also good insurance against future blocks.

Is Firebase also blocked?
There have been reports of Firebase services being affected on some ISPs. If you're experiencing similar issues with Firebase, the same proxy approach works.

Why I Built JioBase

I'm Sunith, a solo developer from India. When the Supabase block hit, it broke my own production app. I spent a weekend building a Cloudflare Worker proxy to fix it. Then I realized every other Indian developer with a Supabase app was scrambling to do the same thing.

So I turned it into JioBase: a free, managed proxy that anyone can set up in 2 minutes. No infrastructure to manage. No Cloudflare account needed. Just change one URL and your app works again.

It's open source (AGPLv3), runs entirely on Cloudflare's edge network, and it's free because no developer should have to pay to fix someone else's problem.

If JioBase helps you, consider buying me a coffee. I pay the Cloudflare bills out of my own salary, and every bit of support helps keep the service running for everyone.

Links:
JioBase: https://jiobase.com
Sunith V S: https://sunithvs.com
GitHub: https://github.com/sunithvs/jiobase
Supabase GitHub Issue: #43142
Buy Me a Coffee: https://buymeacoffee.com/sunithvs

The Lazy Developer's Guide to Automation: How I Made GitHub Work for Me

Picture this: it's another busy day at work, and I'm juggling multiple hotfixes for our product at Eduport. Each fix requires creating a pull request, following the proper formatting, linking the correct ticket ID, and ensuring it goes to the right branch. It's not rocket science, but it's repetitive, time-consuming, and frankly, boring. As a developer who believes in the DRY (Don't Repeat Yourself) principle, this manual PR creation process felt like a personal affront to my lazy (I mean, efficient) nature.

The Breaking Point

After the 5th PR of the day, I had enough. My inner voice screamed, "There has to be a better way!" That's when it hit me: if I was going to be lazy, I needed to be smart about it. The best developers aren't the ones who enjoy repetitive tasks; they're the ones who automate them away.

The Solution: GitHub Actions to the Rescue

I decided to create a GitHub Action that would handle the entire PR creation process automatically. The concept was simple: embed all the necessary information in the commit message, and let the automation handle the rest. Want to create a PR? Just include "pr to" and a ticket ID in your commit message, and boom, the robot takes care of everything else.

Here's what my lazy (but brilliant) solution does:
- Creates PRs automatically based on commit messages
- Extracts ticket IDs and links them properly
- Handles branch targeting with a fallback mechanism
- Applies a standardized PR template
- Manages protected branch rules

The Magic Format

The beauty lies in its simplicity. Instead of navigating through GitHub's UI, all I need to do is:

git commit -m "feat: add awesome feature pr to main with 12345"

That's it. No clicking through web interfaces, no copy-pasting ticket numbers, no filling out PR templates. The action takes care of everything, creating a perfectly formatted PR with all the necessary components.

Why This Makes Me a Better Developer

Some might say this is just lazy.
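(For the curious: extracting the "pr to <branch> with <ticket>" directive from a commit message is a one-line pattern match. The sketch below is illustrative, not the Action's actual implementation.)

```javascript
// Extract the target branch and optional ticket ID from a commit message
// following the "pr to <branch> with <ticket>" convention.
// Illustrative sketch only; the real Action's parsing may differ.
function parsePrDirective(message) {
  const match = message.match(/\bpr to (\S+)(?:\s+with\s+(\S+))?/i);
  if (!match) return null;
  return { targetBranch: match[1], ticketId: match[2] ?? null };
}
```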
I say it's strategic laziness. By automating this process, I've:
- Eliminated human error in PR creation
- Standardized our team's PR format
- Saved countless hours of manual work
- Freed up mental space for actual problem-solving

And there's still an option to edit the PR afterwards, so the automated version isn't the end of the road.

The Ironic Truth

Here's the thing about lazy developers: we often work harder initially to work less later. The time I spent creating this GitHub Action was probably more than what I'd spend creating PRs manually for a month. But that's not the point. The point is that every automated task is a small victory against tedium, a step toward a more efficient workflow.

Conclusion

They say lazy people find the easiest way to do things. I prefer to think of it as finding the smartest way. In software development, automation isn't just about being lazy; it's about being efficient, consistent, and focusing on what truly matters: solving problems and creating value.

So the next time someone calls you lazy for automating your workflow, remember: you're not lazy, you're just living in 2025 while they're stuck in the manual labor of 2020.

P.S. If you're interested in implementing this yourself, check out my GitHub Action configuration. Because sharing automation is caring... and also because I'm too lazy to keep explaining how it works to everyone who asks.

Originally published by a proudly lazy developer who now has more time to write blog posts about being lazy.

GitHub Action: Here

If you enjoyed this post, connect with me on LinkedIn and follow me on GitHub for more fun project stories, creative experiments, and ideas that go beyond typical development. Let's connect, share, and keep building cool things together!

LinkedIn: sunithvs
GitHub: sunithvs
Website: sunith

Follow along for more unique project stories and engineering adventures!

So today during our sprint meeting, something pretty relatable happened. We needed to run some quick polls, but got annoyed with all the ads and complex UIs in the existing tools. You know how it goes: you just want to create a simple poll, but end up clicking through five screens and closing popup ads.

Live link: polls.radr.in

The Lunch Break Challenge

I figured: hey, why not build something super simple? I had about an hour during lunch, so I decided to give it a shot. Having just learned about Supabase's real-time features, I thought this was the perfect opportunity to put that knowledge to use.

The goal was straightforward:
- Create polls quickly
- See results in real time
- No ads, no fluff

The Implementation Journey

Step 1: Planning with AI

First, I turned to Claude to help me structure the project. I asked for a detailed prompt to generate this application, and it provided an extensive specification covering everything from database schema to user flows. Here's what the conversation looked like:

Me:
I need to create a poll app using Next.js and Supabase, where a user can create polls and share a unique URL with other users. When another user opens the link, they get the poll question and can submit a response with their name (name mandatory), then go to a results page with live updates (use Supabase Realtime). The user who created the poll can edit the poll and clear the results; editing the poll should also clear the results. Create a detailed prompt to generate this using Bolt or v0. Don't write the code, just a detailed prompt.

Claude came back with a comprehensive prompt:

Create a real-time polling application using Next.js and Supabase with the following specifications:

Database Schema:
1. Polls table
   - id (uuid, primary key)
   - creator_id (reference to users)
   - question (text)
   - created_at (timestamp)
   - last_edited_at (timestamp)
   - is_active (boolean)
   - url_slug (unique string for sharing)
2. Options table
   - id (uuid, primary key)
   - poll_id (reference to polls)
   - option_text (text)
3. Responses table
   - id (uuid, primary key)
   - poll_id (reference to polls)
   - option_id (reference to options)
   - respondent_name (text, not null)
   - submitted_at (timestamp)

Features and Pages:
1. Poll Creation Page ('/create')
   - Form to input poll question
   - Dynamic addition of poll options (minimum 2)
   - Auto-generation of unique URL slug
   - Success message with shareable link
   - Error handling for form submission
2. Poll Voting Page ('/{poll-slug}')
   - Display poll question and options
   - Required name input field
   - Submit button for vote
   - Validation for duplicate votes
   - Redirect to results page after voting
   - Error handling for invalid poll URLs
3. Poll Results Page ('/{poll-slug}/results')
   - Real-time results using Supabase subscriptions
   - Bar chart or visual representation of votes
   - List of respondents with timestamps
   - Percentage breakdown of each option
   - Total vote count
4. Poll Management Page ('/manage/{poll-slug}')
   - Edit poll question and options
   - Clear all results functionality
   - Confirmation modals for destructive actions
   - Auto-update of last_edited_at
   - Clear results after editing

Technical Requirements:
1. Authentication
   - Simple authentication for poll creators
   - Public access for poll voters
2. Real-time Features
   - Supabase real-time subscriptions for live results
   - Optimistic updates for better UX
3. Data Validation
   - Server-side validation for all inputs
   - Prevention of duplicate votes
   - Required name field validation
4. URL Management
   - Unique, readable URL slugs
   - Proper error handling for invalid URLs
5. State Management
   - Client-side state for form handling
   - Server state for real-time updates
6. Error Handling
   - Graceful error handling for all API calls
   - User-friendly error messages
   - Loading states for all async operations
7. Responsive Design
   - Mobile-first approach
   - Accessible UI components
   - Clear visual hierarchy

Additional Features:
1. Copy-to-clipboard functionality for share URLs
2. Loading skeletons for better UX
3. Toast notifications for actions
4. Basic analytics (vote counts, submission times)
5. Rate limiting for submissions

The application should follow these flows:
1. Poll Creation Flow: Creator fills form -> Validates input -> Generates URL -> Shows success with share link
2. Voting Flow: User opens shared link -> Enters name -> Submits vote -> Redirects to results
3. Results View Flow: Real-time updates of votes -> Visual representation -> List of respondents
4. Edit Flow: Creator accesses management page -> Makes changes -> Confirms -> Clears existing votes

Ensure all database operations are optimized and implement proper error boundaries for production use.

Step 2: Database Setup

I took this prompt and asked Claude to generate the actual database SQL. Within seconds, it provided the complete schema:

CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- Create polls table
CREATE TABLE polls (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  question TEXT NOT NULL CHECK (char_length(question) > 0),
  created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
  last_edited_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
  is_active BOOLEAN DEFAULT true,
  url_slug TEXT UNIQUE NOT NULL CHECK (char_length(url_slug) > 0),
  CONSTRAINT valid_question_length CHECK (char_length(question) <= 500)
);

-- Create poll options table
CREATE TABLE options (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  poll_id UUID NOT NULL REFERENCES polls(id) ON DELETE CASCADE,
  option_text TEXT NOT NULL CHECK (char_length(option_text) > 0),
  created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
  CONSTRAINT valid_option_length CHECK (char_length(option_text) <= 200)
);

-- Create responses table
CREATE TABLE responses (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  poll_id UUID NOT NULL REFERENCES polls(id) ON DELETE CASCADE,
  option_id UUID NOT NULL REFERENCES options(id) ON DELETE CASCADE,
  respondent_name TEXT NOT NULL CHECK (char_length(respondent_name) > 0),
  submitted_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
  CONSTRAINT valid_respondent_name_length CHECK (char_length(respondent_name) <= 100)
);

-- Create indexes for better query performance
CREATE INDEX idx_polls_url_slug ON polls(url_slug);
CREATE INDEX idx_options_poll_id ON options(poll_id);
CREATE INDEX idx_responses_poll_id ON responses(poll_id);
CREATE INDEX idx_responses_option_id ON responses(option_id);
CREATE INDEX idx_responses_submitted_at ON responses(submitted_at);

Step 3: Project Setup

With the database schema ready, it was time to create the project:

npx create-next-app -e with-supabase poll

Step 4: The Magic Happens

The real fun began when I fired up Windsurf and started building. The AI prompts gave me a solid foundation, but now it was time to bring it to life. Here's how it went:

First, I started with the prompt from Claude and asked for a basic polling app. It gave me a functional app right out of the gate! The best part? The real-time features were already included; I just needed to enable them in Supabase. The UI wasn't that great at first, though.

Then I asked it to improve the design. Just one line of prompting, and boom! The create page went from plain to pretty, while keeping all the features intact.

With this, I had a great UI for the create, poll, and results pages. I hosted the first version on Vercel and connected polls.radr.in.

I needed a landing page, so I continued prompting, and got a stunning animation which you can see on the website polls.radr.in.

The Fun Part

The best thing? I made it just in time before the meeting resumed, and we actually used it for the rest of our polls! Sometimes skipping a meal is worth it when you're in the flow. Those hours spent learning Supabase really paid off: from experimentation to actual use in just a week.

The code's on GitHub if anyone wants to check it out.
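For the live results page, the heavy lifting is a simple aggregation over rows shaped like the options and responses tables above. A hypothetical helper in that shape (not necessarily how the repo implements it):

```javascript
// Count votes per option and compute a percentage breakdown, using rows
// shaped like the options/responses schema. Hypothetical helper, for
// illustration; the actual repo may do this differently.
function tallyVotes(options, responses) {
  const total = responses.length;
  return options.map((option) => {
    const count = responses.filter((r) => r.option_id === option.id).length;
    return {
      option_text: option.option_text,
      count,
      percentage: total === 0 ? 0 : Math.round((count / total) * 100),
    };
  });
}
```

Re-running an aggregation like this on every Supabase Realtime event for the responses table is enough to keep the results chart current.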
Nothing fancy, just a simple solution to an annoying problem!

GitHub repo: https://github.com/sunithvs/poll-flow

Stop context switching between branches and boost your development workflow with Git's hidden powerhouse feature.

The Developer's Git Challenge

If you're working on multiple Git branches simultaneously, you've probably experienced this: feature development interrupted by urgent production bugs, constant branch switching, and the dreaded git stash dance. Sound familiar? There's a powerful Git feature that could transform your workflow.

At Eduport, I managed different tasks by making multiple copies of the code and switching between them. While this let me work on new features and quick fixes at the same time, it made it tricky to keep everything in sync, especially when dealing with database migrations and changes in environment variables. Then I found git worktree.

Advanced Git Techniques: Introducing Git Worktree

While most developers rely on basic Git commands, experienced users leverage git worktree to maintain multiple working directories connected to the same repository. This eliminates the overhead of branch switching and context management.

Why You Should Use Git Worktree
- Handle multiple branches simultaneously
- Zero context-switching overhead
- Maintain separate development environments
- No more git stash hassles
- Clean separation of concerns
- Improved code organization

Worktree Implementation

Here's the list of commands used to add worktrees:

git clone <repository-url>
cd <repository-name>

# Create parallel working directories
git worktree add ../path branch-name
git worktree add ../hotfix urgent-fix
git worktree add ../feature-1 new-development
git worktree add ../debug debug/production-i

Handling Production Emergencies

Traditional Git workflow:

git stash save "feature work"
git checkout production
git checkout -b hotfix/bug
# Fix bug
git checkout main
git stash pop

Worktree workflow:

git worktree add ../hotfix hotfix/bug
cd ../hotfix

Super easy, right?

Advanced Git Techniques: Quick Reference

Essential worktree commands for improved productivity:

# List all worktrees
git worktree list

# Add new worktree
git worktree add ../path branch-name

# Remove worktree
git worktree remove ../path

# Cleanup stale worktrees
git worktree prune

Remember to keep it simple:
- Use clear directory names
- Group worktrees in one parent folder
- Clean up when done

Advanced Git workflows require practice. Start with simple scenarios and gradually incorporate more complex patterns as your team adapts. For more advanced usage and full documentation, refer here.

The transition from traditional branch switching to git worktree management is an investment in your development workflow that pays dividends in productivity and organization.

Ever heard of a project that started with a simple quest to find a Valentine? Let me tell you about Minglikko, one of my best memories from CUSAT: a wild ride of creativity and engineering that began in February 2022!

The Origin Story

Picture this: Sahil, Rohit, Varsha, Nihal, and Sabeeh are brainstorming how to find a Valentine for Varsha. But being engineers, we couldn't just settle for a typical matchmaking approach. We thought, "What if we create something unique?"

Our initial idea was simple: a Google Form, filled out by users looking for a Valentine, that matches people based on interests. But we wanted more. That's when our design wizard Amrutha Chechi entered and transformed our basic concept into an amazing website design.

The Challenge

The Google Form was too limited, and while Airtable had the features we needed, its free plan came with restrictions. So we decided to build a full-fledged platform where users could:

- Create a login
- Answer interesting questions
- Remain completely anonymous
- Mark their priorities
- Be matched by a matchmaking algorithm
- Chat with their matched Valentine

Amrutha Dinesh's UI was so fantastic it put us under pressure to release quickly (and this was before ChatGPT existed, imagine that!). Without that design, we wouldn't have envisioned such a comprehensive platform or achieved that level of outreach. Kudos to Amrutha Chechi! (The questions and texts in the design were placeholders; the actual website had some changes.)

Launch and Buzz

We dropped a "Coming Soon" poster with the name "Minglikko", and boom! Curiosity exploded in and around CUSAT. Random friends and strangers were sliding into our DMs, asking, "What is this?" Within just hours of launching, we saw incredible traction: from 100 registrations in the first hour to over 500 by midnight. Turns out, everyone was desperate to find a Valentine!

On the night of February 13th, we faced a critical challenge: our matching algorithm wasn't ready.
Despite the website launch, we kept working intensively throughout the night to develop a robust matching system. The matching algorithm was a team effort, with Sahil Athrij playing a crucial role in developing the core logic. Together with Rohit, Varsha, Shaheen, Nihal Muhemmed, and Sabeeh, we crafted it. Our dedication paid off when we successfully presented a research paper about this algorithm at The Gandhigram Rural Institute, Dindigul District, with Sasi Gopalan Sir as our mentor.

Questions and Gender "Feature"

When we released the website, some of my friends complained that there was no section for entering gender (we did this intentionally, not by accident!). As feature requests kept piling up, we (well, actually Rohit) decided to add a gender selection box on the homepage. He included an extensive list of 140 genders just for fun, but it was purely cosmetic. The matching algorithm didn't consider gender at all, and we didn't even store the selected data.

The questions we asked were designed to be a little quirky and fun, bringing out each person's personality. Here's the list:

- Rate your brains. 🧠 (0: Brain Potato, 5: Omniscient)
- Show me your biceps. 💪 (0: Pappadam, 5: Hercules)
- Beauty undo? (0: Mirrors scare me, 5: Cleopatra)
- How charismatic are you? (0: Bed is my valentine, 5: I sell sand in the Sahara)
- How much money do you burn? (0: Starving to death, 5: I pave golden roads)
- Generosity, yes, rate it. (0: I burn orphanages, 5: Karl Marx)
- You die for God? (0: "I am become Death." - J. Robert Oppenheimer, 5: "I am become Death." - Krishna)
- Your connection with liberalism (0: Girls? No school!!, 5: Martin Luther King)

Each question could receive up to 5 points, but users could only give a total of 20 points across all questions. This made them think carefully about which traits to prioritise and guess what would matter most to their perfect match: a fun little game of planning to find their Valentine!

The Fun Finale

We finally released the matches, and our Valentine's mission was a success.
We created a platform that helped Varsha find her Valentine and provided opportunities for others. What started as a friend's matchmaking quest turned into a memorable campus moment.

Technical Journey

As a Django pro, backend development was my playground. The server-side logic and database management flowed smoothly, with API integration happening at lightning speed. The frontend, however, was a different story: a real challenge that had us scratching our heads. Stepping up to the plate, SANU MUHAMMED brought his UI expertise and completely transformed our basic interface.

- Chat System: Implemented end-to-end encryption using the Signal Protocol, ensuring user privacy and secure communication
- Backend Infrastructure: Used Django for robust and fast server-side development
- Matching Algorithm: Developed a custom algorithm to connect compatible users based on their interests and preferences
- Anonymous Identities: Created unique code names like "Shikkari Kuyil" to protect user anonymity
- Real-time Communication: Utilized Django Channels for seamless, instant messaging
- AWS: Used AWS EC2 for hosting the entire platform
- Collaborative Development: Team effort involving Sahil, Rohit, Varsha, Shaheen, Nihal, Sanu, and Sabeeh

If you enjoyed this post, connect with me on LinkedIn and follow me on GitHub for more fun project stories, creative experiments, and ideas that go beyond typical development. Let's connect, share, and keep building cool things together!

LinkedIn: sunithvs
GitHub: sunithvs
Website: sunithvs

Follow along for more unique project stories and engineering adventures!

As a Django developer, you may have encountered a common performance issue called the N+1 query problem. This can severely impact the speed and efficiency of your application, especially as your codebase and data grow. In this blog post, we'll dive into what the N+1 query problem is, why it's a problem, and how you can easily solve it using Django's powerful tools.

What is the N+1 Query Problem?

Imagine you have a Django application with three models: Company, Employee, and Project. You want to display a list of all companies, along with the names of their employees and the projects those employees are working on.

```python
class Company(models.Model):
    name = models.CharField(max_length=100)


class Employee(models.Model):
    name = models.CharField(max_length=100)
    company = models.ForeignKey(Company, on_delete=models.CASCADE, related_name='employees')


class Project(models.Model):
    name = models.CharField(max_length=100)
    employees = models.ManyToManyField(Employee, related_name='projects')
```

Without any optimizations, your view might look something like this:

```python
class CompanyListNoOptimisationView(View):
    def get(self, request):
        companies = Company.objects.all()  # 1 query for all companies
        data = []
        for company in companies:
            company_data = {
                'name': company.name,
                'employees': [
                    {
                        'name': employee.name,
                        # 1 query per employee for projects
                        'projects': [project.name for project in employee.projects.all()]
                    }
                    for employee in company.employees.all()  # 1 query per company for employees
                ]
            }
            data.append(company_data)
        return JsonResponse(data, safe=False)
```

In this scenario, the initial query fetches all the companies. But then, for each company, an additional query is made to fetch the employees, and for each employee, another query is made to fetch the projects.
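To see the shape of the problem outside the ORM, here is a self-contained sketch in plain sqlite3 (toy schema and data invented for illustration, not the article's Django project) that counts the queries the naive loop issues:

```python
import sqlite3

# Toy schema mirroring the article's models: 2 companies,
# 3 employees per company, 2 projects per employee.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE company  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, company_id INTEGER);
    CREATE TABLE project  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE project_employees (project_id INTEGER, employee_id INTEGER);
""")
for c in range(2):
    conn.execute("INSERT INTO company(id, name) VALUES (?, ?)", (c + 1, f"Co{c + 1}"))
    for e in range(3):
        eid = c * 3 + e + 1
        conn.execute("INSERT INTO employee(id, name, company_id) VALUES (?, ?, ?)",
                     (eid, f"Emp{eid}", c + 1))
        for p in range(2):
            pid = eid * 2 + p
            conn.execute("INSERT INTO project(id, name) VALUES (?, ?)", (pid, f"P{pid}"))
            conn.execute("INSERT INTO project_employees VALUES (?, ?)", (pid, eid))

query_count = 0

def run(sql, args=()):
    """Execute a query and count it, like QueryCounterMiddleware does."""
    global query_count
    query_count += 1
    return conn.execute(sql, args).fetchall()

# The naive loop: one query for companies, one per company for employees,
# and one per employee for projects.
for (company_id, _name) in run("SELECT id, name FROM company"):
    for (employee_id, _ename, _cid) in run(
            "SELECT id, name, company_id FROM employee WHERE company_id = ?",
            (company_id,)):
        run("""SELECT p.id, p.name FROM project p
               JOIN project_employees pe ON p.id = pe.project_id
               WHERE pe.employee_id = ?""", (employee_id,))

print(query_count)  # prints 9: 1 (companies) + 2 (employees) + 6 (projects)
```

The count grows with the data: with C companies and E employees each, the loop issues 1 + C + C*E queries.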
This is the classic N+1 query pattern: one query for the initial list, then additional queries for every company and every employee it contains.

Query Count Breakdown

Let's consider an example scenario with the following data:

- 2 companies
- 3 employees per company
- 2 projects per employee

In this case, the total number of queries generated would be:

- 1 query to fetch all companies
- 1 query per company to fetch employees: 2 companies = 2 queries
- 1 query per employee to fetch projects: 2 companies × 3 employees = 6 queries

Total queries: 1 (companies) + 2 (employees) + 6 (projects) = 9 queries.

```
Total time: 0.03s
Number of queries: 9
Query 1: SELECT "base_company"."id", "base_company"."name" FROM "base_company"
Query 2: SELECT "base_employee"."id", "base_employee"."name", "base_employee"."company_id" FROM "base_employee" WHERE "base_employee"."company_id" = 1
Query 3: SELECT "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" = 1
Query 4: SELECT "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" = 2
Query 5: SELECT "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" = 4
Query 6: SELECT "base_employee"."id", "base_employee"."name", "base_employee"."company_id" FROM "base_employee" WHERE "base_employee"."company_id" = 2
Query 7: SELECT "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" = 3
Query 8: SELECT "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" = 5
Query 9: SELECT "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" = 6
[11/Nov/2024 19:33:30] "GET /company-list/ HTTP/1.1" 200 379
```

These logs are generated by QueryCounterMiddleware; more on that at the end of this post.

The problem with this approach is that as the number of companies, employees, and projects grows, the number of queries will increase dramatically, leading to slow response times and high database load.

Solving the N+1 Problem with select_related and prefetch_related

- select_related: Used for foreign key or one-to-one relationships. It performs a SQL join and retrieves the related object in a single query.
- prefetch_related: Used for many-to-many and reverse foreign key relationships. It performs a second query and maps the results in Python.

These tools allow us to reduce query counts by loading all related objects in bulk.
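Under the hood, the batching idea behind prefetch_related boils down to one `IN (...)` query per relationship plus a stitch-up in Python. Here is that idea in a self-contained plain-sqlite3 sketch (toy schema and data invented for illustration, not Django internals):

```python
import sqlite3
from collections import defaultdict

# Toy schema standing in for the article's models: 2 companies,
# 3 employees per company, 2 projects per employee.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE company  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, company_id INTEGER);
    CREATE TABLE project  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE project_employees (project_id INTEGER, employee_id INTEGER);
""")
for c in range(1, 3):
    conn.execute("INSERT INTO company VALUES (?, ?)", (c, f"Co{c}"))
for e in range(1, 7):
    conn.execute("INSERT INTO employee VALUES (?, ?, ?)",
                 (e, f"Emp{e}", 1 if e <= 3 else 2))
    for p in (2 * e - 1, 2 * e):
        conn.execute("INSERT INTO project VALUES (?, ?)", (p, f"P{p}"))
        conn.execute("INSERT INTO project_employees VALUES (?, ?)", (p, e))

query_count = 0

def run(sql, args=()):
    """Execute a query and count it."""
    global query_count
    query_count += 1
    return conn.execute(sql, args).fetchall()

# Query 1: all companies.
companies = run("SELECT id, name FROM company")

# Query 2: all employees for those companies in ONE IN (...) query
# (what prefetch_related does for the reverse foreign key).
company_ids = [cid for cid, _ in companies]
marks = ",".join("?" * len(company_ids))
employees = run(f"SELECT id, name, company_id FROM employee "
                f"WHERE company_id IN ({marks})", company_ids)

# Query 3: all projects for those employees in ONE query,
# then map the results back to employees in Python.
employee_ids = [eid for eid, _, _ in employees]
marks = ",".join("?" * len(employee_ids))
rows = run(f"""SELECT pe.employee_id, p.name FROM project p
               JOIN project_employees pe ON p.id = pe.project_id
               WHERE pe.employee_id IN ({marks})""", employee_ids)
projects_by_employee = defaultdict(list)
for eid, pname in rows:
    projects_by_employee[eid].append(pname)

print(query_count)  # prints 3
```

Note that the query count stays at 3 no matter how many rows exist: the row counts change, the number of round trips does not.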
Here's the same view, but now using select_related and prefetch_related:

```python
class CompanyListOptimisedView(View):
    def get(self, request):
        companies = Company.objects.prefetch_related(
            Prefetch('employees',
                     queryset=Employee.objects.select_related('company').prefetch_related('projects'))
        )
        data = []
        for company in companies:
            company_data = {
                'name': company.name,
                'employees': [
                    {
                        'name': employee.name,
                        'projects': [project.name for project in employee.projects.all()]
                    }
                    for employee in company.employees.all()
                ]
            }
            data.append(company_data)
        return JsonResponse(data, safe=False)
```

Query Count Breakdown (Optimised)

Now, with the optimised code:

- 1 query to fetch all companies
- 1 query to fetch all employees with their related company data (using select_related)
- 1 query to fetch all projects for all employees (using prefetch_related)

Total queries: 3

```
Total time: 0.02s
Number of queries: 3
Query 1: SELECT "base_company"."id", "base_company"."name" FROM "base_company"
Query 2: SELECT "base_employee"."id", "base_employee"."name", "base_employee"."company_id", "base_company"."id", "base_company"."name" FROM "base_employee" INNER JOIN "base_company" ON ("base_employee"."company_id" = "base_company"."id") WHERE "base_employee"."company_id" IN (1, 2)
Query 3: SELECT ("base_project_employees"."employee_id") AS "_prefetch_related_val_employee_id", "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" IN (1, 2, 4, 3, 5, 6)
[11/Nov/2024 19:40:05] "GET /company-list-optimised/ HTTP/1.1" 200 379
```

By applying select_related and prefetch_related, we reduced the query count from 9 to 3, achieving a significant performance improvement.

QueryCounterMiddleware

```python
import time

from django.conf import settings
from django.db import connection
from django.utils.deprecation import MiddlewareMixin


class QueryCounterMiddleware(MiddlewareMixin):
    def process_request(self, request):
        # Only proceed if DEBUG is True. Note the parentheses: without them,
        # `and` binds tighter than `or` and the DEBUG check is bypassed.
        if settings.DEBUG and (settings.SHOW_RAW_QUERY or settings.SHOW_QUERY_COUNT):
            self.start_time = time.time()
            self.queries_before_request = len(connection.queries)

    def process_response(self, request, response):
        if settings.DEBUG and (settings.SHOW_RAW_QUERY or settings.SHOW_QUERY_COUNT):
            total_time = time.time() - self.start_time
            queries_after_request = len(connection.queries)
            if settings.SHOW_QUERY_COUNT:
                query_count = queries_after_request - self.queries_before_request
                # Print labels in yellow ('\033[93m') and values in green ('\033[92m')
                print(f"\033[93mTotal time:\033[0m \033[92m{total_time:.2f}s\033[0m")
                print(f"\033[93mNumber of queries:\033[0m \033[92m{query_count}\033[0m")
            if settings.SHOW_RAW_QUERY:
                # '\033[91m' switches to red; '\033[0m' resets the colour
                for index, query in enumerate(connection.queries[self.queries_before_request:], start=1):
                    print(f"\033[92mQuery {index}:\033[0m \033[91m{query['sql']}\033[0m")
        return response
```

The QueryCounterMiddleware is a custom Django middleware that provides a simple way to monitor the performance of your application. It works by intercepting the request-response cycle and capturing information about the database queries executed during the process.

Here's what the middleware does:

- Track the Number of Queries: When a request is made, the middleware stores the initial number of executed queries. After the request is processed, it calculates the difference to determine the total number of queries executed during the request.
- Log the Raw SQL Queries: In addition to the query count, the middleware can also print the raw SQL queries executed during the request.
This can be extremely helpful for identifying the root cause of performance issues.

- Measure the Total Request Time: The middleware also tracks the total time taken for the request-response cycle, providing valuable insights into the overall performance of your application.

How to Use the QueryCounterMiddleware

To use the QueryCounterMiddleware in your Django project, follow these steps:

Add the middleware to your project: open your settings.py file and add the QueryCounterMiddleware to your MIDDLEWARE list:

```python
MIDDLEWARE = [
    # Other middleware classes...
    'path.to.QueryCounterMiddleware',
]
```

Configure the middleware behavior: you can control the behavior of the QueryCounterMiddleware by setting the following variables in your settings.py file:

- SHOW_QUERY_COUNT: If True, the middleware will print the total number of queries executed during the request.
- SHOW_RAW_QUERY: If True, the middleware will print the raw SQL queries executed during the request.

Optimising your Django queries with select_related and prefetch_related can significantly improve application performance, especially when working with complex relationships. The N+1 query issue, though common, is avoidable with a few best practices, leading to faster, more efficient applications and a better user experience.

If you found this post helpful, connect with me on LinkedIn and follow me on GitHub for more insights, blogs, and stories on Django, backend development, and scalable application design. Let's connect and keep learning together!

LinkedIn: sunithvs
GitHub: sunithvs

Follow for more Django, backend tips, and development stories!

Optimising Django Queries to Overcome the N+1 Problem! was originally published in Python in Plain English on Medium, where people are continuing the conversation by highlighting and responding to this story.
