My New Project - InjectPrompt
Check out my new blog for content focused on AI Jailbreaks, Prompt Injections, and System Prompt Leaks
Apr 15 · David Willis-Owen
March 2025
Claude Sonnet 3.7 Jailbreak
How to One-Shot Jailbreak Claude Sonnet 3.7 in March 2025
Mar 15 · David Willis-Owen
Jailbreaking Grok 3 | DeepSeek, ChatGPT, Claude & More
How easy is it to jailbreak frontier LLMs in 2025?
Mar 8 · David Willis-Owen
February 2025
Is GitHub Copilot Poisoned? Part 2
Scaling up my experiment to detect IOCs in larger code models
Feb 22 · David Willis-Owen
Invite your friends to read AIBlade
Thank you for reading AIBlade — your support allows me to keep doing this work.
Feb 18 · David Willis-Owen
How Secure Is DeepSeek?
Can we trust Chinese models with our personal data?
Feb 8 · David Willis-Owen
January 2025
Is GitHub Copilot Poisoned?
How to test code-suggestion models for Indicators of Compromise
Jan 25 · David Willis-Owen
AI Poisoning - Is It Really A Threat?
Is the web too big to prevent AI models from being poisoned?
Jan 9 · David Willis-Owen
December 2024
AI Pentesting With VulnHuntr
Think penetration testing is safe from AI? Think again...
Dec 15, 2024 · David Willis-Owen
November 2024
AI Bug Bounty Guide 2024
A complete guide to earning money by hacking AI platforms in 2024
Nov 14, 2024 · David Willis-Owen
Claude Computer Use - The First Prompt Injection
Should we really be letting AI control our computers?
Nov 2, 2024 · David Willis-Owen
October 2024
Hacking The AI Goat
Can I break this vulnerable AI Architecture?
Oct 19, 2024 · David Willis-Owen