Most job hunters know the pain of going through all the different job portals
and applying for jobs one by one. In most cases, you type in your name,
location, education, and so on, and upload your resume. And nowadays, that
resume will usually be parsed by an Applicant Tracking System (ATS) before any
human sees it.
This approach comes with many problems and unnecessary hassles. Let's discuss
them briefly, and then I will propose a solution that makes life easier for
both job hunters and job posters.
The Problems
- Repetition: You are repeating the same information about yourself again and
again. Apply to 50 companies, and you'll have to write down your name,
education, etc., 50 times.
- Signing up for a new job portal: This is just horrible. When you see a job
post on a site where you don't have an account, signing up usually means
entering the same information yet again: your name, education, picture,
skills, and so on. It takes 5-10+ minutes to fill everything out, and every
new service means repeating the whole process. And what about privacy?
- ATS issues: Many sites let you upload your resume to auto-fill most of the
form, such as your name and email. Speaking from experience, most of these
parsers do not work 100% correctly: I have to fix some fields manually and add
missing information. And if you have a two-column resume, it's even harder for
the ATS to parse. Maybe tools like Large Language Models (LLMs) will improve
this.
The Solution
Now that we know the problems, what could a solution look like? Let me propose
one: a simple structured text file, such as YAML. The exact schema would have
to be a standard, but it might look something like this:
```yaml
personal_details:
  first_name: Md
  middle_name: Sujauddin
  last_name: Sekh
  preferred_first_name: Sujauddin
  gender: Male
  race: Indian
  location: West Bengal, India
  contact:
    country_code: "+91"
    phone_number: 9999999999
    email: ssujj@protonmail.com
    website: https://sujauddin.me
educations:
  - name: Narula Institute of Technology
    location: West Bengal, India
    from: "2021"
    to: "2025"
    degree_name: B.Tech.
    field_of_study: "Computer Science Engineering (Specialization in AIML)"
    achievements:
      - Ranked 2nd in debugging competition
links:
  - name: Leetcode
    url: https://leetcode.com/sujaudd1n
    url_text: leetcode.com/sujaudd1n
  - name: Github
    url: https://github.com/sujaudd1n
    url_text: github.com/sujaudd1n
  - name: Linkedin
    url: https://linkedin.com/in/sujaudd1n
    url_text: linkedin.com/in/sujaudd1n
  - name: sujauddin.me/articles
    url: https://sujauddin.me/articles
    url_text: sujauddin.me/articles
targets:
  swe:
    professional_summary:
      title: Junior Software Engineer
      summary: >
        Aspiring software engineer with experience in designing and
        developing full-stack AI-integrated applications. As an engineering
        intern, I contributed to software projects, leveraging technical
        expertise to drive innovative solutions and tackle complex
        real-world problems.
    skills:
      languages: [Python, JavaScript, C, C++, SQL, HTML, CSS]
      libraries: [React, NextJS, Django, FastAPI, Replicate]
      'Protocols & APIs': [HTTP, REST APIs, OAuth, OIDC, OpenAPI]
      Databases: [SQLite, PostgreSQL, MongoDB]
      Tools: [Git, BASH, Virtual Machine, Docker]
      Experience: [Linux, Algorithms, Fullstack Development, Open Source,
        CI/CD, Agile, Object Oriented Programming, GenAI, Machine Learning]
    work_experience:
      - company_name: Genovation Solutions
        location: West Bengal, India
        job_title: Junior Backend Developer Intern
        from: Jul 2025
        to: Sep 2025
        contributions:
          - Engineered a scalable backend system for IoT applications,
            featuring a real-time database that facilitated seamless data
            ingestion and processing.
          - Integrated Large Language Models (LLMs) to automate data
            interpretation, enhancing data comprehension and actionable
            insights by 50%.
          - Developed and deployed full-stack AI applications using
            technologies like FastAPI, Replicate, Gradio, and Streamlit,
            ensuring robust performance and user-friendly interfaces.
          - Successfully migrated a Gradio application to Streamlit,
            improving maintainability and deployment flexibility while
            preserving all original UI components and functionality.
          - Boosted research output by implementing integration between
            autonomous AI agents.
      - company_name: Zapuza Technologies
        location: West Bengal, India
        job_title: Junior Python Developer Intern
        from: Nov 2024
        to: Apr 2025
        contributions:
          - Enhanced employee management by integrating attendance tracking
            with location-based monitoring, resulting in a 20% increase in
            system functionality and streamlined administrative processes.
          - Improved data-driven decision-making by designing and developing
            a financial dashboard that delivered a 40% enhancement in data
            visualization, enabling quicker insights.
          - Fortified project security by implementing a role-based access
            control system, ensuring secure authorization and authentication
            protocols to protect sensitive data, mitigating potential
            breaches.
    projects:
      - name: QuickNote
        description: AI-powered article writer SaaS.
        links:
          - https://makequicknote.vercel.app
        techstack: [Python, Django, NextJS, GeminiAI]
        details:
          - Developed a cutting-edge SaaS platform for productivity,
            leveraging AI-driven technology to generate high-quality PDF
            articles from user-provided topics or prompts.
          - Utilized Amazon S3 cloud storage to store and serve generated
            PDF files, ensuring secure and efficient data management.
          - Successfully integrated Gemini’s GenAI capabilities to power
            the article creation process, enabling users to access
            high-quality, AI-generated content.
misc:
  qnas:
    'Can you relocate?': 'Yes'
    'What is your preferred location?': Remote
```
This is nothing new; software engineering already has similar standards. One that almost all backend engineers are familiar with is the OpenAPI specification, which describes API details in a structured manner. Because it's a standard, there's no ambiguity in parsing it, and it has a vast ecosystem of tools.
We can add additional fields, such as photo_url, and so on. Not only that, but we can also add language-specific, company-specific, or anything-specific information here. For example, for a JavaScript role, you could put all the JavaScript-related information under a dedicated key.
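As a rough sketch of that idea (these key names are invented for illustration; a real standard would define its own), a role-specific section might look like:

```yaml
targets:
  javascript_dev:            # hypothetical role-specific section
    skills:
      frameworks: [React, NextJS]
      runtimes: [Node.js]
    projects:
      - name: QuickNote
        techstack: [NextJS]
```

A portal hiring for a JavaScript role would read only this subtree and ignore the rest of the file.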
This simple file solves all the problems.
- You no longer have to repeat anything. The job poster only needs one HTML input element to upload this file or a link to it.
- When you sign up for a new job portal, just give them this file; it has all the information they need, including previous experiences and all past academic details.
- Parsing has never been easier. All major programming languages have libraries to parse YAML, JSON, etc. There will be no ambiguity.
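To make the parsing point concrete, here is a minimal sketch of how a portal could auto-fill its form from such a file. JSON is used here only because Python's standard library parses it out of the box; a YAML version is identical with a library like PyYAML's safe_load. The field names follow the example schema above, which is a proposal, not an established standard.

```python
import json

# The same resume data as a JSON document (the YAML above maps 1:1 to JSON).
resume_doc = """
{
  "personal_details": {
    "first_name": "Md",
    "last_name": "Sekh",
    "contact": {"email": "ssujj@protonmail.com"}
  },
  "educations": [
    {"name": "Narula Institute of Technology", "degree_name": "B.Tech."}
  ]
}
"""

resume = json.loads(resume_doc)

# Auto-fill form fields directly -- no PDF parsing, no ambiguity.
pd = resume["personal_details"]
full_name = f'{pd["first_name"]} {pd["last_name"]}'
email = pd["contact"]["email"]
print(full_name, email)  # → Md Sekh ssujj@protonmail.com
```

One structured read replaces the entire error-prone resume-parsing step.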
It doesn't end here. There are more benefits.
- You can still create a resume from the YAML file. Tools such as resumaker will create resumes for you from these types of YAML files.
- You can have a massive YAML file where everything is included, and when you upload it, the company will only extract the information it needs from the relevant keys.
- Finding candidates becomes easier for companies. Just as sites publish robots.txt files containing rules for crawlers, you can put your resume file on your website, such as sujauddin.me/resume.yml, and those looking for candidates can run a scraper that collects resumes.
- Migration to a different job searching platform is just one file upload away. With this file, you give them all your past professional history and have a profile there in seconds.
- You can put this file at a specified URL, such as https://sujauddin.me/resume.yml, and every other platform or tool can periodically pull from that URL. That way, you only have to make changes in one place and everything is updated.
- Since it is a standard, there is the possibility of a vibrant ecosystem of tools, just like with OpenAPI.
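The pull-based flow above can be sketched in a few lines. Everything here is hypothetical: the URL, the key names, and the helper functions are illustrative choices, not part of any existing standard.

```python
import json
from urllib.request import urlopen

RESUME_URL = "https://sujauddin.me/resume.json"  # hypothetical published location
PORTAL_KEYS = {"personal_details", "educations", "targets"}  # sections this portal uses


def fetch_resume(url: str) -> dict:
    """Pull the candidate's published resume file from its well-known URL."""
    with urlopen(url) as resp:
        return json.load(resp)


def extract_profile(resume: dict, wanted=PORTAL_KEYS) -> dict:
    """Keep only the sections this portal needs; ignore everything else."""
    return {k: v for k, v in resume.items() if k in wanted}
```

A nightly job calling fetch_resume followed by extract_profile would keep every platform profile in sync with the single source file, which is exactly the one-place-to-update property described above.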