CHATBOT DEVELOPMENT & APPLIED LLM
LLM SECURITY
CERTIFIED

Distributed by:

Issued to

Jothilingam Dheeraj

Want to report a typo or a mistake?

Credential Verification

Issue date: March 31, 2026

ID: 7b843316-2eb8-497d-afdf-d6340557a756

Issued by

Innovation Lab @ NTU CCDS

Type

Training

Level

Intermediate

Format

Offline

Description

This workshop focused on the security risks associated with Large Language Models (LLMs), particularly in chatbot and RAG-based systems. Participants explored real-world attack scenarios such as prompt injection and data leakage through live demonstrations and guided exercises. The session highlighted why common defenses fail and introduced effective mitigation strategies, including prompt injection detection, guardrails, and secure system design. Participants left with a strong security-first mindset and practical techniques to safeguard LLM applications.

Skills

LLM Security

Prompt Injection Attacks

AI Risk Mitigation

Secure RAG Design

AI Safety

Red Teaming Concepts

Earning Criteria

Participation

Attended the full workshop session and participated actively in the hand-on activities.