
Senior LLM Engineer - NLP / Machine Learning

Nomiso
Bangalore | 5-8 LPA | Posted 27 Jun 2025
FULL TIME
Skills: BERT, BLEU, GPT, ROUGE, T5

Job Description

Position Overview:

  • We are seeking an experienced NLP & LLM Specialist to join our team.
  • The ideal candidate will have deep expertise in working with transformer-based models, including GPT, BERT, T5, RoBERTa, and similar models.
  • This role requires experience in fine-tuning these pre-trained models on domain-specific tasks, as well as crafting and optimizing prompts for natural language processing tasks such as text generation, summarization, question answering, classification, and translation.
  • The candidate should be proficient in Python and familiar with NLP libraries such as Hugging Face, spaCy, and NLTK, with a solid understanding of model evaluation metrics.

Roles and Responsibilities:

Model Expertise:

  • Work with transformer models such as GPT, BERT, T5, RoBERTa, and others for a variety of NLP tasks, including text generation, summarization, classification, and translation.
  • Model Fine-Tuning: Fine-tune pre-trained models on domain-specific datasets to improve performance for specific applications such as summarization, text generation, and question answering.
  • Prompt Engineering: Craft clear, concise, and contextually relevant prompts to guide transformer-based models toward generating desired outputs for specific tasks, and iterate on prompts to optimize model performance.
  • Instruction-Based Prompting: Implement instruction-based prompting to guide the model toward achieving specific goals, ensuring that the outputs are contextually accurate and aligned with task objectives.
  • Zero-shot, Few-shot, Many-shot Learning: Utilize zero-shot, few-shot, and many-shot learning techniques to improve model performance without the need for full retraining.
  • Chain-of-Thought (CoT) Prompting: Implement Chain-of-Thought (CoT) prompting to guide models through complex reasoning tasks, ensuring that the outputs are logically structured and provide step-by-step explanations.
  • Model Evaluation: Use evaluation metrics such as BLEU, ROUGE, and other relevant metrics to assess and improve the performance of models for various NLP tasks.
  • Model Deployment: Support the deployment of trained models into production environments and integrate them into existing systems for real-time applications.
  • Bias Awareness: Be aware of and mitigate issues related to bias, hallucinations, and knowledge cutoffs in LLMs, ensuring high-quality and reliable outputs.
  • Collaboration: Collaborate with cross-functional teams including engineers, data scientists, and product managers to deliver efficient and scalable NLP solutions.
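As a rough illustration of the prompting techniques listed above (instruction-based prompting, few-shot learning, and chain-of-thought), here is a minimal Python sketch. The task, example pairs, and prompt wording are hypothetical, not a specific production prompt:

```python
# Minimal sketch of instruction-based, few-shot, and chain-of-thought (CoT)
# prompt construction. All example text below is hypothetical.

def build_few_shot_prompt(examples, query, instruction):
    """Assemble an instruction-based few-shot prompt: an instruction,
    a handful of labeled input/output pairs, then the new query."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")          # model completes from here
    return "\n".join(lines)

def add_cot(prompt):
    """Append a chain-of-thought cue so the model produces a
    step-by-step explanation before its final answer."""
    return prompt + " Let's think step by step."

# Hypothetical sentiment-classification task with two few-shot examples.
examples = [
    ("The movie was fantastic.", "positive"),
    ("I would not recommend this product.", "negative"),
]
prompt = build_few_shot_prompt(
    examples,
    "Service was slow but the food was great.",
    "Classify the sentiment of each input as positive or negative.",
)
```

With zero examples the same builder yields a zero-shot prompt; adding more pairs moves it toward few-shot or many-shot prompting, improving task performance without any retraining.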

Must-Have Skills:

  • 7+ years of overall experience, including at least 5 years working with transformer-based models on NLP tasks, with a focus on text generation, summarization, question answering, classification, and similar tasks.
  • Expertise in transformer models like GPT (Generative Pre-trained Transformer), BERT (Bidirectional Encoder Representations from Transformers), T5 (Text-to-Text Transfer Transformer), RoBERTa, and similar models.
  • Familiarity with model architectures, attention mechanisms, and self-attention layers that enable LLMs to generate human-like text.
  • Experience in fine-tuning pre-trained models on domain-specific datasets for tasks such as text generation, summarization, question answering, classification, and translation.
  • Familiarity with concepts like attention mechanisms, context windows, tokenization, and embedding layers.
  • Awareness of biases, hallucinations, and knowledge cutoffs that can affect LLM performance and output quality.
  • Expertise in crafting clear, concise, and contextually relevant prompts to guide LLMs towards generating desired outputs.
  • Experience in instruction-based prompting.
  • Use of zero-shot, few-shot, and many-shot learning techniques for maximizing model performance without retraining.
  • Experience in iterating on prompts to refine outputs, test model performance, and ensure consistent results.
  • Crafting prompt templates for repetitive tasks, ensuring prompts are adaptable to different contexts and inputs.
  • Expertise in chain-of-thought (CoT) prompting to guide LLMs through complex reasoning tasks by encouraging step-by-step breakdowns.
  • Proficiency in Python and experience with NLP libraries (e.g., Hugging Face, spaCy, NLTK).
  • Experience with transformer-based models (e.g., GPT, BERT, T5) for text generation tasks.
  • Experience in training, fine-tuning, and deploying machine learning models in an NLP context.
  • Understanding of model evaluation metrics (e.g., BLEU, ROUGE).
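The evaluation metrics named above (BLEU, ROUGE) are n-gram overlap scores between model output and reference text. As a simplified sketch, unigram ROUGE-1 precision/recall/F1 can be computed in plain Python; real evaluation would use an established library such as Hugging Face's `evaluate`, with proper tokenization and multi-reference support:

```python
from collections import Counter

def rouge_1(candidate, reference):
    """Unigram-overlap ROUGE-1 between a candidate summary and a single
    reference. Simplified sketch: whitespace tokenization, no stemming,
    no multi-reference handling."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())   # clipped unigram matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = (0.0 if precision + recall == 0
          else 2 * precision * recall / (precision + recall))
    return precision, recall, f1

p, r, f = rouge_1("the cat sat", "the cat sat on the mat")
```

BLEU follows the same clipped n-gram-matching idea but is precision-oriented, combines multiple n-gram orders, and applies a brevity penalty.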

Qualifications:

  • BE/B.Tech or equivalent degree in Computer Science or a related field.
  • Excellent communication skills in English, both verbal and written.
