---
language: en
license: mit
datasets:
- cnamuangtoun/resume-job-description-fit
metrics:
- f1
tags:
- resume-job-matching
- lora
- peft
- embedding
- text-matching
---

# Resume-Job Matcher LoRA

This is a LoRA fine-tuned version of BAAI/bge-large-en-v1.5 for matching resumes with job descriptions.

## Model Details

- **Base Model**: BAAI/bge-large-en-v1.5
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **LoRA Config**: r=8, alpha=16, target_modules=['query', 'key', 'value']
- **Dataset**: cnamuangtoun/resume-job-description-fit

## Usage

Here's how to use this model:

```python
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel
import torch
import torch.nn.functional as F

# Load the base model, apply the LoRA adapter, and load the tokenizer
base_model = AutoModel.from_pretrained("BAAI/bge-large-en-v1.5")
model = PeftModel.from_pretrained(base_model, "shashu2325/resume-job-matcher-lora")
tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-large-en-v1.5")

# Example texts
resume_text = "Software engineer with Python experience"
job_text = "Looking for Python developer"

# Tokenize both texts
resume_inputs = tokenizer(resume_text, return_tensors="pt", max_length=512, padding="max_length", truncation=True)
job_inputs = tokenizer(job_text, return_tensors="pt", max_length=512, padding="max_length", truncation=True)

# Get embeddings
with torch.no_grad():
    resume_outputs = model(**resume_inputs)
    job_outputs = model(**job_inputs)

    # Mask-aware mean pooling: exclude padding tokens from the average
    resume_mask = resume_inputs["attention_mask"].unsqueeze(-1).float()
    resume_emb = (resume_outputs.last_hidden_state * resume_mask).sum(dim=1) / resume_mask.sum(dim=1)
    job_mask = job_inputs["attention_mask"].unsqueeze(-1).float()
    job_emb = (job_outputs.last_hidden_state * job_mask).sum(dim=1) / job_mask.sum(dim=1)

# Normalize and compute cosine similarity
resume_emb = F.normalize(resume_emb, p=2, dim=1)
job_emb = F.normalize(job_emb, p=2, dim=1)
similarity = torch.sum(resume_emb * job_emb, dim=1)

# Squash the similarity into a (0, 1) match score
match_score = torch.sigmoid(similarity).item()
print(f"Match score: {match_score:.4f}")
```
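
## LoRA Configuration

For reference, the LoRA configuration listed under Model Details roughly corresponds to the `peft` setup sketched below. This is a minimal illustration of how such an adapter could be attached to the base encoder, not the exact training script for this model; the `lora_dropout` and `bias` values are assumptions and are not stated on this card.

```python
from transformers import AutoModel
from peft import LoraConfig, get_peft_model

# Attach a LoRA adapter matching the configuration listed in Model Details:
# r=8, alpha=16, targeting the attention query/key/value projections.
base_model = AutoModel.from_pretrained("BAAI/bge-large-en-v1.5")
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["query", "key", "value"],
    lora_dropout=0.1,  # assumed value, not specified by the card
    bias="none",       # assumed default
)
peft_model = get_peft_model(base_model, lora_config)

# Only the low-rank adapter weights are trainable; the base encoder stays frozen
peft_model.print_trainable_parameters()
```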