
Birjob

Unification — is it a universal need or just my personal habit?

Ismat Samadov

Software Engineer & Tech Writer


I like challenges. Interviews are one of those challenges I enjoy: a 30-minute technical interview often teaches me more than five years at a corporate job. So I apply to a lot of roles. Most of the time I get past the first round and reach the second technical interview. Sometimes I get rejected at the first stage — often I think it’s because my salary expectations are higher than their range, or maybe the fit wasn’t right.

Keeping track of all those applications and job boards became difficult. I developed FOMO — the feeling that I might miss a good opportunity. That feeling started the whole Birjob journey.

The email scraper — useful but noisy

My first attempt was simple: an email notification system that scraped about 70 job boards and sent matching job results to users. The idea was to return only the jobs that matched each user’s keywords. A few colleagues tried it and liked the idea — but the execution was rough.

Problems:

  • No database. The system didn’t keep track of which jobs were new or old.

  • Lots of noise. Users got up to 200 job listings in one email. Imagine opening your inbox every morning and seeing 200 jobs — most of them repeats. You can’t tell what’s new or what you already saw.

This was the first lesson: delivering everything is not the same as delivering useful things. I stopped that version.

Everyday example: it was like subscribing to too many newsletters — at some point you stop opening any of them.

From emails to a website — fast prototypes, messy choices

Next, I built a website. I'm not a perfectionist when launching; I prefer to ship and learn fast. I bought several domain names during this period (some of the names came from ChatGPT, don't laugh).

I built the first working version while participating in a startup cohort. In under a month I had a live site — thanks to accelerated learning and a lot of late nights. My front-end skills were limited, so the first site was basic but functional.

Tech stack (short and cheap):

  • Frontend: HTML, CSS, JavaScript, Jinja2 templates

  • Backend: Django

  • Hosting: Render.com (easy setup, about $10/month back then)

  • Database: neon.tech (Postgres, free 500 MB)

  • Scheduling / automation: GitHub Actions (not ideal for scraping schedules, but it worked)

Because neon.tech gave only 500 MB free, I used a “truncate-and-load” approach for each scrape: if the scrape succeeded, I deleted the old data and loaded the fresh dump. This had pros and cons. It kept costs low and was simple to implement, but it erased historical data and made incremental updates harder. In practice, each scrape produced under ~50 MB of data, so I never came close to the free-tier limit.
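
The truncate-and-load flow is simple enough to sketch in a few lines. This is a minimal illustration using Python's built-in sqlite3 as a stand-in for the Neon Postgres instance; the `jobs` table and its columns are made up for the example, and the real pipeline may differ:

```python
import sqlite3

def truncate_and_load(conn, jobs):
    """Replace the entire jobs table with a fresh scrape.
    Runs in one transaction so a failed load keeps the old data."""
    with conn:  # commits on success, rolls back on exception
        conn.execute("DELETE FROM jobs")  # "truncate" the previous dump
        conn.executemany(
            "INSERT INTO jobs (title, company, url) VALUES (?, ?, ?)",
            jobs,
        )

# Demo with an in-memory SQLite DB standing in for Neon Postgres.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (title TEXT, company TEXT, url TEXT)")

truncate_and_load(conn, [("Data Engineer", "Acme", "https://example.com/1")])
truncate_and_load(conn, [("Backend Dev", "Beta", "https://example.com/2"),
                         ("Analyst", "Gamma", "https://example.com/3")])

count = conn.execute("SELECT COUNT(*) FROM jobs").fetchone()[0]
print(count)  # only the latest scrape survives: 2
```

Wrapping the delete and the insert in one transaction matters: if the load fails midway, the old data survives instead of leaving an empty table.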

GitHub Actions automated the scraping schedule. It’s not a perfect scheduler for scraping workloads, but it served while I was small and testing ideas.

The founder trap — feature overload

Like many solo founders, I made the same mistake over and over: I tried to add too many features at once, without clear business validation.

Features I built:

  • Scraped job listing pages

  • A job posting page (so employers could post jobs)

  • ATS-like features: CV parsing, matching candidates to job descriptions

  • Candidate tracking and other advanced ATS functionality

  • Blog, user registration, in-house analytics, market trends

These features were fun to build. But I didn’t check the business side first. Since I was already scraping and showing jobs for free, why would employers pay me to post? And I didn’t have the budget to compete with the big, established local players.

Lesson learned:

  • Focus on one core problem and solve it well.

  • Validate whether customers will pay before building expensive features.

My latest tech stack

I kept the stack small and practical so I could move fast with little money and limited frontend skills. This is what I’m using now and why:

  • Claude Code (coding assistant): Because my Next.js experience is limited, I use Claude to generate most of the code. It writes pages, components, API routes, helpers — roughly 99% of the surface code. I still review, test, and tweak what Claude produces, but it saves huge time and helps me ship features quickly.

  • Next.js (fullstack): I use Next.js for everything — frontend pages, API routes, and server-side logic. It keeps the app tidy since one framework covers both client and server work.

  • Vercel (hosting & deployment): The project is deployed on Vercel’s free Hobby plan. Deploys are automatic: push to the main branch and Vercel builds and publishes. It’s fast and frictionless for a one-person project.

  • neon.tech (Postgres database): I keep the same free Neon Postgres instance and the same truncate-and-load approach for scraped data. Each scrape replaces the old dump with a fresh dump. It’s cheap and simple while the dataset is small.

  • GitHub Actions (scheduler for scrapers): I schedule scraping runs with GitHub Actions. I use cron-style schedules in Actions to run scrapers at set times, trigger scraper workflows on demand, and chain jobs (fetch → transform → load). GitHub Actions isn’t a perfect scheduler for heavy scraping, but it’s free, reliable enough for now, and keeps everything in one place (repo + CI).
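
The cron-plus-chaining setup above can be sketched as a single workflow file. This is a hypothetical minimal version (workflow name, script paths, and schedule are illustrative, not the real Birjob config):

```yaml
# .github/workflows/scrape.yml (hypothetical)
name: scrape-jobs
on:
  schedule:
    - cron: "0 6 * * *"   # run daily at 06:00 UTC
  workflow_dispatch:       # allow manual runs from the Actions tab
jobs:
  fetch:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python scripts/fetch.py
  transform:
    needs: fetch           # chain: fetch -> transform -> load
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python scripts/transform.py
  load:
    needs: transform
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python scripts/load.py
```

In practice the chained jobs also need to share state, for example via `actions/upload-artifact`/`actions/download-artifact` or by having each step write straight to the database; scheduled runs can also start late when GitHub's runners are busy.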

Why this works for me

  • Ship fast: Claude + Next.js lets me build features quickly without deep Next expertise.

  • Low cost: Vercel + Neon free tiers keep monthly costs near zero during testing.

  • Simple operations: One repo, one deploy pipeline, one scheduler (GitHub Actions) — less to manage.

Trade-offs / what I watch

  • AI-generated code risk: Auto-generated code can hide bugs or non-idiomatic patterns. I review changes and run basic tests.

  • Scheduler limits: GitHub Actions has minutes/usage limits and isn’t made for high-frequency, heavy scraping — I watch run times and concurrency.

  • No history with truncate-and-load: The current DB flow is cheap but loses history and makes incremental updates harder.

  • Vendor limits: Free tiers mean watching build times, DB storage, and bandwidth so I don’t hit surprises.

Small next steps I plan

  1. Add basic tests (unit + API smoke tests) to catch errors from AI-generated code.

  2. Add simple CI checks (lint, typecheck) before Vercel deploys.

  3. Monitor GitHub Actions usage and add retries/backoff to scraper jobs to avoid throttles.

  4. Track DB size and scrape payloads; migrate from truncate-and-load to incremental updates when needed.

  5. Start a small feature-flag or staging flow so I can safely test changes before they go live.
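
For plan item 3, retries with exponential backoff around scraper requests, a small wrapper is usually enough. A sketch with illustrative names; the real scraper code may handle errors differently:

```python
import time

def with_retries(fn, max_attempts=4, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying on exception with exponential backoff.
    Delays between attempts: 1s, 2s, 4s, ..."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: let the workflow fail visibly
            sleep(base_delay * (2 ** attempt))

# Demo: a flaky "scrape" that fails twice, then succeeds.
calls = {"n": 0}
def flaky_scrape():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("throttled")
    return "ok"

# Skip the real sleeps in the demo by injecting a no-op sleep.
result = with_retries(flaky_scrape, sleep=lambda s: None)
print(result, calls["n"])  # ok 3
```

Injecting `sleep` as a parameter keeps the backoff logic testable without real delays, and raising after the last attempt makes the Actions run fail loudly instead of silently swallowing errors.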

Finding the niche — leaning into what worked

The next big shift was focus. I decided to concentrate on one niche: job aggregation for the Azerbaijani market. This had two advantages:

  • Local competition was weak: most sites scraped only 2–3 sources and showed around 2,000 jobs.

  • My scraper returned 5,000+ jobs per cycle — a clear numeric advantage.

At this stage I bought new domains and tightened the product focus:

  • jobry.me — bought: Dec 3, 2024 — first focused aggregator site

  • birjob.com — bought: May 5, 2025 — final name for the single-purpose aggregator

I removed the heavy ATS features and focused on collecting and showing jobs. I still made mistakes (I added blog pages, user registration, and other extras), but I iterated faster and cut features that didn’t help.

What I removed after testing:

  • Blog page (low immediate traffic and traction)

  • Email notification service (too noisy without per-user state or a database to track what users had already seen)

  • In-house user analytics (replaced by Google Analytics)

  • Market trends dashboard and heavy user tracking

  • Many small pages and features that added maintenance overhead but little user value

After pruning, the product was simple: a job aggregator that surfaced many more listings than competitors.

Technical trade-offs I made

To move fast and keep costs low, I accepted several trade-offs:

  • Truncate-and-load DB: cheap and simple, but loses history and makes updates less efficient.

  • Render.com + GitHub Actions: low ops and cheap, but not the fastest or most resilient at scale.

  • Larger frontend bundles: because I’m not a frontend expert, pages can be slow. This is fine for an MVP but needs work for growth.

Each choice got me to a working product quickly. They also left technical debt and scaling questions for later.

What surprised me

  • Having more jobs mattered more than a polished UI in the early days. Users saw value in breadth.

  • Email digests were often worse than no email. Users preferred using the site to find jobs rather than reading huge digests.

  • Competing with existing job boards is more about relationships and brand than just features. Employers trust long-standing platforms.

What I still don’t know

  • Monetization: Will employers pay for featured listings? Will recruiters pay for candidate leads? Can I sell API access or data? I have ideas, but no validated revenue yet.

  • Retention: Do users come back regularly because of broader listings, or do they visit once and never return? I need retention metrics to decide.

  • Product depth: Should Birjob stay a pure aggregator, or add a few light, paid recruiter tools without becoming a full ATS? I’m unsure where the line should be.

  • Scaling pains: Which part will fail first when traffic grows — DB, scraping, hosting, frontend? I have guesses but no real stress tests.

Small experiments I’d run next (practical, low-cost)

  1. Monetization pilot: offer a featured listing package to 10–20 local employers and measure interest.

  2. Retention tracking: start storing simple user/session data (cookies or light accounts) to measure return rates and funnels.

  3. Smarter emails: move from massive digests to incremental emails like “3 new jobs since your last visit” so users aren’t overloaded.

  4. Frontend cleanup: remove unused JS, lazy-load parts of the page, reduce bundle size — these are small wins with big UX returns.

  5. Micro-ATS test: add one small recruiter feature (for example, bookmarking candidates) to see if recruiters will pay for tiny workflow gains.
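
Experiment 3 ("3 new jobs since your last visit") boils down to filtering by a per-job first-seen timestamp. A sketch with made-up data; note this assumes storing a `first_seen` value per job, which the current truncate-and-load flow does not preserve:

```python
from datetime import datetime

def new_jobs_since(jobs, last_visit):
    """Return only jobs first seen after the user's last visit,
    newest first: the basis of an incremental email digest."""
    fresh = [j for j in jobs if j["first_seen"] > last_visit]
    return sorted(fresh, key=lambda j: j["first_seen"], reverse=True)

# Illustrative data, not real listings.
jobs = [
    {"title": "Data Engineer", "first_seen": datetime(2025, 9, 18)},
    {"title": "Backend Dev",   "first_seen": datetime(2025, 9, 20)},
    {"title": "Analyst",       "first_seen": datetime(2025, 9, 21)},
]
last_visit = datetime(2025, 9, 19)

digest = new_jobs_since(jobs, last_visit)
print(len(digest))  # 2 jobs since the last visit
```

The same filter can drive the email subject line ("2 new jobs since your last visit"), which keeps each digest short instead of resending the full 200-listing dump.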

Final thoughts — this is not finished

Right now Birjob is simpler and clearer than at the start. The journey taught me to reduce noise, focus on one core value, and iterate in small steps. I don’t know yet how I’ll make money from the data or how to grow users cheaply — but I do know this won’t stop. I expect the process to continue: build, learn, remove, repeat.

I keep asking myself: what is the smallest change that will move a real business metric — more weekly users, better employer conversions, or a little revenue? Which small bet will turn Birjob from a useful tool into a sustainable product?

If you were sitting across from me now, I would ask: what would make you come back to a job site? Would you pay for:

  • better filters and search,

  • curated or hand-picked roles,

  • or direct connections to employers?

Your answer would help shape the next small experiment.

https://www.birjob.com/

Published on September 21, 2025
