🔮 Advanced · 🥇 Gold Certificate · #1 Most In-Demand Live Sessions

Generative AI
& LLMs

The course the industry is hiring for right now. Prompt engineering, RAG systems, fine-tuning, and AI agents — built with the actual tools used in production GenAI teams.

6 Weeks
📺 18 Live Sessions
👥 Max 15 students
🔑 API Access Included
🗣️ English + Telugu
340+ Enrolled
5.0★ Rating
5 Projects
93% Completion
₹8,999 (was ₹16,000)
You save ₹7,001 — 44% OFF
or ₹750/month · 12-month no-cost EMI
What's Included
18 live sessions (2hrs each)
Lifetime recording access
5 production-grade projects
LinkedIn Gold badge
API credits for practice (₹500 value)
Prompt engineering playbook
Mentor office hours
Placement assistance
🗓 Next Batch: May 3, 2025 Sat & Sun · 10:00 AM – 12:00 PM IST
// Curriculum Highlights
What You'll Learn
🧠
LLM Architecture Internals
Decoder-only transformers, GPT architecture, attention in generation, KV cache
🎯
Prompt Engineering
Zero-shot, few-shot, chain-of-thought, ReAct, self-consistency — with measurable results
📚
Retrieval-Augmented Generation
Build production RAG pipelines with LangChain, ChromaDB, and Mistral/GPT-4o
⚙️
Fine-Tuning LLMs
LoRA, QLoRA, instruction tuning — adapt open models to domain tasks efficiently
🤖
AI Agents & Tool Use
Build autonomous agents with LangChain/LangGraph, tool calling, memory systems
🔍
Vector Databases
Embeddings, similarity search, ChromaDB, Pinecone — the backbone of modern AI apps
🛡️
LLM Safety & Evaluation
Hallucination detection, RAGAS evaluation, guardrails, red-teaming basics
🚀
Production Deployment
Serve LLMs via FastAPI, stream responses, cost optimisation, latency tuning
🌐
Multimodal AI
GPT-4o vision, image-to-text pipelines, LLaVA — language + vision together
🔗
LangChain & LlamaIndex
Industry-standard frameworks for building LLM-powered applications end-to-end
// Models You'll Work With
The LLM Landscape — Hands-On

You'll use both proprietary APIs and open-source models. No single vendor lock-in — you'll know how to choose the right model for any task and budget.

OpenAI

GPT-4o

The benchmark model for reasoning, coding, and multimodal tasks. Used for API pattern learning and production comparison.

✓ Weeks 1, 3, 6
Mistral AI

Mistral 7B / Mixtral

Best open-source alternative. Fine-tunable locally, deployable on your own infra. Used for RAG and fine-tuning projects.

✓ Weeks 2, 4, 5
Meta

Llama 3

Meta's flagship open model. Used for instruction fine-tuning with LoRA and QLoRA on domain-specific tasks.

✓ Week 4
Google

Gemini 1.5 Flash

Long-context champion. 1M token context window. Used for document Q&A and summarisation over massive corpora.

✓ Week 3
Anthropic

Claude 3 Haiku

Fast, cost-efficient, with strong instruction following. Used for high-throughput classification and agent tasks.

✓ Week 5
HuggingFace

Open Model Hub

phi-3, Gemma-2, Qwen2, Falcon — the full ecosystem. You'll know how to evaluate, select, and deploy any open model.

✓ Throughout
// Core Architecture
RAG Pipeline — Built Step by Step

RAG (Retrieval-Augmented Generation) is the most deployed GenAI architecture in production. You build it from scratch and understand every component.

📄
01 · Ingest

Document Loading

PDFs, URLs, Notion, APIs — LangChain loaders for any source.

✂️
02 · Chunk

Text Splitting

Recursive, semantic, and sentence-aware chunking strategies.

🔢
03 · Embed

Vector Embeddings

OpenAI / SBERT embeddings stored in ChromaDB or Pinecone.

🔍
04 · Retrieve

Similarity Search

Top-k retrieval, MMR reranking, hybrid sparse+dense search.

✨
05 · Generate

LLM Response

Augmented prompt → grounded, accurate, cited answer.

Also covered: Advanced RAG patterns — HyDE (hypothetical document embeddings), query rewriting, multi-query retrieval, RAGAS evaluation, and self-RAG for quality-controlled generation.
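To make the five steps above concrete, here is a toy end-to-end sketch of the pipeline in plain Python. The "embeddings" are simple bag-of-words counts so the example runs with no dependencies — the course itself uses real embedding models with ChromaDB/Pinecone, and the document text and function names here are illustrative, not course materials.

```python
# Toy RAG pipeline: ingest -> chunk -> embed -> retrieve -> generate.
import math
from collections import Counter

def chunk(text, size=8):
    """Split text into fixed-size word chunks (step 02)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Bag-of-words 'embedding' (step 03) -- a stand-in for a real model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Similarity between two sparse vectors (step 04)."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Top-k retrieval over the chunk 'index'."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

doc = ("The refund policy allows returns within 30 days of purchase. "
       "Shipping is free for orders above 500 rupees. "
       "Warranty claims require the original invoice.")
chunks = chunk(doc)
context = retrieve("how many days do I have to return an item?", chunks, k=1)
print(context[0])  # the chunk a real pipeline would paste into the prompt (step 05)
```

In a production pipeline, the retrieved chunk is pasted into an augmented prompt ("Answer using only this context: …") before the LLM call — that grounding is what makes the answer citable.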
// Prompt Engineering
Prompting Is Engineering, Not Guessing

You'll learn to construct, evaluate, and improve prompts systematically. Here are three techniques you'll master — shown with real examples:

Chain-of-Thought · Reasoning Tasks
Prompt
A store has 48 mangoes. 3/4 are sold, then 12 more arrive. How many are left? Think step by step before answering.
Model Output
Step 1: 48 × 3/4 = 36 sold → 48 − 36 = 12 remaining
Step 2: 12 + 12 = 24 mangoes total.
Answer: 24
Few-Shot Classification · Classification Tasks
Prompt
Classify sentiment as POSITIVE or NEGATIVE.
Review: "Great battery life!" → POSITIVE
Review: "Keyboard stopped working." → NEGATIVE
Review: "Delivery was fast, camera is amazing." → ?
Model Output
POSITIVE
System Prompt + Persona · Chatbot / Agent Design
System Prompt
You are a precise legal document analyst. Extract only:
1. Party names
2. Contract dates
3. Key obligations
Respond ONLY as JSON. If unsure, output null for that field.
Structured Output
{"parties": ["Infosys Ltd", "TCS Ltd"], "date": "2024-03-01", "obligations": ["Infosys to deliver by Q2", "TCS to pay ₹2Cr"]}
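In code, the few-shot pattern above is just careful message assembly. Here is a minimal sketch of turning labelled examples into the alternating user/assistant turns a chat-completions API expects — the function name is illustrative, not part of the course materials.

```python
# Build a few-shot classification prompt as chat messages.
def build_few_shot_messages(instruction, examples, query):
    """Turn labelled examples into alternating user/assistant turns."""
    messages = [{"role": "system", "content": instruction}]
    for text, label in examples:
        messages.append({"role": "user", "content": f'Review: "{text}"'})
        messages.append({"role": "assistant", "content": label})
    # The unlabelled query goes last; the model's reply is the prediction.
    messages.append({"role": "user", "content": f'Review: "{query}"'})
    return messages

msgs = build_few_shot_messages(
    "Classify sentiment as POSITIVE or NEGATIVE.",
    [("Great battery life!", "POSITIVE"),
     ("Keyboard stopped working.", "NEGATIVE")],
    "Delivery was fast, camera is amazing.",
)
print(len(msgs))  # 1 system + 2 examples x 2 turns + 1 query = 6 messages
```

The same list can be passed as the `messages` argument of an OpenAI- or Anthropic-style chat call; showing labels as prior assistant turns is what makes the pattern "few-shot".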
// After This Course
Career Outcomes

GenAI Engineer

The hottest job title in tech right now. Building LLM-powered products, RAG systems, and agents at AI-first companies.

₹15–35 LPA fresher range
🤖

AI Agent Developer

Autonomous agent systems are being deployed everywhere. LangGraph and tool-calling skills are the entry ticket.

₹18–40 LPA range
🏗️

LLM Application Engineer

Building on top of foundation models at product companies — chatbots, copilots, search, summarisation tools.

₹14–30 LPA range
🔬

AI Researcher / Fine-tuner

LoRA/QLoRA fine-tuning skills open roles at research labs and companies adapting models for specific domains.

₹20–50 LPA range

// This course is for

🐍 Python-comfortable engineers with ML or NLP background
🎓 Anyone who completed our NLP course or has equivalent experience
💼 Software engineers and product builders who want to build with LLMs
Not for: those without Python basics — the pace assumes engineering fluency
// Week by Week
Full Curriculum — 6 Weeks
Week 1 · LLM Foundations & Prompt Engineering
  • How LLMs work: decoder-only transformers, autoregressive generation
  • Tokenisation: BPE, tiktoken, context window and token cost
  • Temperature, top-p, top-k — controlling output diversity
  • OpenAI API: chat completions, system/user/assistant roles, streaming
  • Zero-shot, few-shot, chain-of-thought prompting — with eval benchmarks
  • ReAct prompting and self-consistency
  • Structured outputs: JSON mode, function calling, Pydantic models
  • Prompt injection attacks and defence patterns
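One of the week's concepts — temperature — fits in a few lines. Logits are divided by the temperature before the softmax, so values above 1 flatten the next-token distribution and values below 1 sharpen it. A pure-Python sketch (the logit values are made up for illustration):

```python
# How temperature reshapes the next-token distribution.
import math

def softmax_with_temperature(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                  # raw scores for three candidate tokens
sharp = softmax_with_temperature(logits, temperature=0.5)
flat = softmax_with_temperature(logits, temperature=2.0)
# The top token's probability grows as temperature drops.
print(round(sharp[0], 3), round(flat[0], 3))
```

This is why temperature 0 is used for deterministic tasks (classification, extraction) and higher values for creative generation.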
Week 2 · Embeddings & Vector Databases
  • Text embeddings: what they are, why they work, dimensions
  • OpenAI text-embedding-3, SBERT — comparison and selection
  • ChromaDB: store, query, filter, update vectors
  • Pinecone: managed vector DB for production scale
  • Cosine similarity, dot product — the maths of retrieval
  • Semantic search engine: index your own document corpus
  • Hybrid search: BM25 + dense retrieval combined
  • Reranking: Cohere Rerank, cross-encoder models
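The "maths of retrieval" bullet is worth a worked example: dot product rewards vector magnitude, while cosine normalises it away — which is why embeddings are often L2-normalised before indexing. The vectors below are tiny hand-picked illustrations, not real embeddings.

```python
# Dot product vs cosine similarity on 2-D toy vectors.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

q = [1.0, 0.0]               # query vector
short_doc = [1.0, 0.0]       # same direction, small magnitude
long_doc = [3.0, 1.0]        # different direction, large magnitude

# Dot product prefers the longer vector; cosine prefers the aligned one.
print(dot(q, short_doc), dot(q, long_doc))
print(round(cosine(q, short_doc), 3), round(cosine(q, long_doc), 3))
```

After L2-normalisation the two metrics rank identically, which is the default most vector DBs assume.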
Week 3 · RAG with LangChain & LlamaIndex
  • LangChain: chains, document loaders, text splitters, retrievers
  • LlamaIndex: query engines, node parsers, indices
  • Basic RAG: load → chunk → embed → retrieve → generate
  • Advanced RAG: HyDE, multi-query retrieval, contextual compression
  • Conversational RAG: memory, chat history, follow-up questions
  • Long-context RAG with Gemini 1.5 Flash (1M token window)
  • RAGAS evaluation: faithfulness, answer relevancy, context precision
  • Project: Intelligent Q&A system over your own PDF library
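The "chunk" step of that load → chunk → embed flow can be sketched in a few lines: a fixed-size character splitter with overlap, the simplest version of the idea that LangChain's `RecursiveCharacterTextSplitter` generalises. The parameters below are illustrative defaults, not course-mandated values.

```python
# Fixed-size character chunking with overlap.
def split_with_overlap(text, chunk_size=100, overlap=20):
    """Yield chunks of at most chunk_size chars, each sharing `overlap`
    chars with the previous chunk so a sentence cut at one boundary
    still appears whole in a neighbouring chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

text = "a" * 250
chunks = split_with_overlap(text, chunk_size=100, overlap=20)
print([len(c) for c in chunks])  # three chunks covering all 250 chars
```

Semantic and sentence-aware splitters improve on this by cutting at meaning boundaries rather than at a character count, but the overlap trick carries over.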
Week 4 · Fine-Tuning LLMs
  • When to RAG vs when to fine-tune — decision framework
  • Full fine-tuning vs PEFT (parameter-efficient) methods
  • LoRA: low-rank adaptation — theory, rank, alpha, target modules
  • QLoRA: 4-bit quantisation + LoRA for consumer hardware
  • Instruction dataset format: system/user/assistant JSONL
  • Fine-tune Llama 3 on domain data with HuggingFace PEFT + TRL
  • Evaluation: perplexity, task-specific metrics, human eval
  • Merge adapter weights and push to HuggingFace Hub
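Why LoRA is "parameter-efficient" comes down to simple arithmetic: instead of updating a d × k weight matrix, you train two low-rank factors B (d × r) and A (r × k) and apply W′ = W + (α/r)·BA. The numbers below use a typical 4096 × 4096 attention projection with rank r = 8 as an illustration:

```python
# LoRA parameter-count arithmetic for one weight matrix.
d, k, r = 4096, 4096, 8

full_params = d * k           # parameters touched by full fine-tuning
lora_params = r * (d + k)     # parameters trained by LoRA (B is d x r, A is r x k)
reduction = full_params // lora_params

print(full_params, lora_params, reduction)  # 16777216 65536 256
```

A 256× reduction per matrix is what lets QLoRA squeeze a 7B–8B model's fine-tune onto a single consumer GPU (or a free Colab session).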
Week 5 · AI Agents & Tool Use
  • Agent architecture: LLM + tools + memory + planning
  • Tool calling: function definitions, JSON schema, OpenAI tools API
  • LangChain agents: ReAct, OpenAI Functions agent
  • LangGraph: stateful multi-step agent workflows, conditional edges
  • Memory types: conversation buffer, summary, vector store memory
  • Build: a research agent that searches, reads, and summarises web content
  • Multi-agent systems: supervisor-worker patterns with LangGraph
  • Agent safety: output validation, loop detection, cost guardrails
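The tool-calling loop at the heart of the agents week reduces to: the model emits a tool name plus JSON arguments, the runtime dispatches to the matching function, and the result is fed back as a message. This bare-bones sketch stubs out the model call with `fake_llm`; the tool name and schema are hypothetical, not from the course.

```python
# One step of a tool-calling agent loop with a stubbed model.
import json

def get_weather(city: str) -> str:
    """A hypothetical tool the agent can call."""
    return f"28C and sunny in {city}"

TOOLS = {"get_weather": get_weather}

def fake_llm(messages):
    """Stand-in for a real model call: returns a tool call to execute."""
    return {"tool": "get_weather", "arguments": json.dumps({"city": "Chennai"})}

def run_one_step(messages):
    call = fake_llm(messages)
    fn = TOOLS[call["tool"]]                 # dispatch on the tool name
    args = json.loads(call["arguments"])     # parse the JSON arguments
    result = fn(**args)
    messages.append({"role": "tool", "content": result})
    return messages

out = run_one_step([{"role": "user", "content": "Weather in Chennai?"}])
print(out[-1]["content"])
```

A real agent wraps this step in a loop until the model answers instead of calling a tool — which is exactly where the loop-detection and cost-guardrail topics come in.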
Week 6 · Multimodal AI, Production Deployment & Capstone
  • Multimodal LLMs: GPT-4o vision, LLaVA, Gemini Pro Vision
  • Image + text pipelines: describe, extract, and reason over images
  • Streaming responses: Server-Sent Events, Gradio stream, FastAPI
  • Cost optimisation: caching, model routing, prompt compression
  • Latency and throughput: batching, async calls, vLLM basics
  • LLM guardrails: Nemo Guardrails, input/output filtering
  • Capstone: production GenAI application of your choice
  • Capstone demo to batch + mentor review + Gold badge issuance
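Of the cost-optimisation techniques listed, caching is the simplest: memoise identical (model, prompt) pairs so a repeated request costs zero tokens. A minimal sketch — `expensive_llm_call` is a stub standing in for a real API call, and the cache here is an in-process dict where production systems would use Redis or similar:

```python
# Memoising identical LLM calls to avoid paying twice.
calls = {"count": 0}
_cache: dict = {}

def expensive_llm_call(model: str, prompt: str) -> str:
    calls["count"] += 1               # pretend every call here costs money
    return f"[{model}] answer to: {prompt}"

def cached_completion(model: str, prompt: str) -> str:
    key = (model, prompt)
    if key not in _cache:             # only pay for the first occurrence
        _cache[key] = expensive_llm_call(model, prompt)
    return _cache[key]

a = cached_completion("gpt-4o", "Summarise RAG in one line.")
b = cached_completion("gpt-4o", "Summarise RAG in one line.")
print(calls["count"], a == b)        # one real call; second served from cache
```

Exact-match caching only helps for repeated prompts; semantic caching (matching on embedding similarity) extends the idea to near-duplicate requests.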
// Hands-on Work
5 Production-Grade Projects
PROJECT 01

Prompt Engineering Benchmark

Systematically compare 5 prompting techniques (zero-shot to CoT to ReAct) on 3 tasks. Build an evaluation harness that measures accuracy, consistency, and cost per task.

OpenAI API · Promptfoo · Pandas
PROJECT 02

Document Intelligence RAG App

Ingest a corpus of 50+ PDFs (research papers, policies, contracts). Build a conversational Q&A interface with citation tracking, source attribution, and RAGAS-evaluated faithfulness.

LangChain · ChromaDB · GPT-4o · RAGAS
PROJECT 03

Domain-Specific Fine-Tuned Model

Curate an instruction dataset, fine-tune Llama 3 with QLoRA on a specific domain (medical, legal, or finance), evaluate against the base model, and publish to HuggingFace Hub.

QLoRA · PEFT · TRL · HuggingFace
PROJECT 04

Autonomous Research Agent

A LangGraph-powered agent that takes a research question, autonomously searches the web, reads relevant pages, synthesises findings, and produces a structured report with citations.

LangGraph · Tavily Search · Claude · FastAPI
🏆
Project 05 — Capstone: Ship a GenAI Product
You define the problem, choose your stack (RAG / agent / fine-tuned model / multimodal), and build a fully deployed application that solves a real problem. Examples from past batches: AI legal brief generator, multilingual customer support agent, medical report summariser, AI tutor for JEE. The capstone is demoed live to the batch — real feedback, real accountability, real portfolio asset.
// Your Credential
Gold Certificate Awarded
🥇

Newton JEE Gold Badge

NLP & GenAI Specialist — Generative AI & LLMs

Appears on your LinkedIn profile

The Two-Gold-Badge Combination

The GenAI Gold badge, combined with the NLP Gold badge, creates a uniquely powerful LinkedIn credential cluster: NLP & GenAI Specialist. This combination is the most recruiter-visible signal for LLM engineering and GenAI product roles in the current market.

1
Complete all 6 weeks and 5 projects
2
Deploy and demo capstone app to batch + mentor
3
Mentor approves and signs off on the project
4
Gold badge credential link issued within 48hrs
5
One-click publish to LinkedIn Certifications
// Your Mentor
Meet Your Instructor
PR
Priya Raghunathan
GenAI Lead Engineer · Ex-Google Brain India & Sarvam AI
5 years at the frontier of generative AI — building production LLM pipelines at Google Brain India and leading the GenAI engineering team at Sarvam AI (one of India's most-watched AI startups). Priya shipped India's first multilingual voice AI product. She teaches GenAI the way she learned it: by building things that break in interesting ways and then understanding exactly why. Her rule is simple — if you can't ship it, you don't really understand it.
LLMs · RAG · LangChain · Fine-tuning · AI Agents · IIT Madras B.Tech
// Upcoming Batches
Pick Your Batch
Batch #08
May 3, 2025
Sat & Sun · 10:00 AM – 12:00 PM IST
3 seats left
Batch #09
May 24, 2025
Sat & Sun · 2:00–4:00 PM IST
9 seats open
Batch #10
Jun 14, 2025
Sat & Sun · 10:00 AM – 12:00 PM IST
15 seats open
// Ready to Start?
Enrol in This Course
₹8,999 (was ₹16,000)
Save ₹7,001 · 44% OFF
or ₹750/month · 12-month no-cost EMI
🔒 Secured by Razorpay · 100% refund after 2 sessions if unsatisfied
Everything included
18 live sessions · 36 hrs total
Lifetime recording access
5 production-grade projects
₹500 API credits for practice
LinkedIn-verified Gold badge
Prompt engineering playbook
Mentor office hours (1hr/week)
Resume & LinkedIn review
Placement referral support
// Alumni Feedback
What Students Say
★★★★★
Every session felt like being inside a real GenAI engineering team. Priya doesn't teach you to call an API — she teaches you to think about why you're calling it, when to switch models, and how to debug when it gives nonsense. I shipped my first RAG app in week 3. It's in production now.
KS
Kiran Sharma
SDE-2 → GenAI Engineer · Sarvam AI
★★★★★
The fine-tuning week was a revelation. I had thought fine-tuning required a full GPU cluster. QLoRA on Colab was a complete paradigm shift. I fine-tuned Llama 3 on medical Q&A, published it to HuggingFace, and it's now my most-starred repo with 200+ downloads.
SK
Supriya Kamath
MBBS + CS → Medical AI Engineer
★★★★★
The agent week was the most mind-expanding week of any course I've taken. Building a research agent that autonomously reads, plans, and synthesises — and seeing it actually work — felt like the future arriving early. Got 3 interview calls the week after posting the demo on LinkedIn.
AT
Aakash Trivedi
B.Tech CSE → AI Agent Engineer · Krutrim
★★★★★
5.0 rating from me. The capstone process — where you define the problem, build it, deploy it, and demo it live to the batch — is unlike any other learning experience. Priya's feedback during the demo was surgical. My capstone is now my portfolio's centrepiece and it led directly to my current role.
RM
Roshan Mehta
Product Manager → GenAI Lead · Razorpay