Hacker News with Generative AI: AI Security

MCP Security Notification: Tool Poisoning Attacks (invariantlabs.ai)
Invariant has discovered a critical vulnerability in the Model Context Protocol (MCP) that allows for what we term Tool Poisoning Attacks. This vulnerability can lead to sensitive data exfiltration and unauthorized actions by AI models. We explain the attack vector, its implications, and mitigation strategies. We urge users to exercise caution when connecting to third-party MCP servers and to implement security measures to protect sensitive information.
Deepseek reportedly restricts employee travel amid AI security concerns (the-decoder.com)
Deepseek employees working on AI models must surrender their passports and can no longer travel freely abroad, according to insiders. Whether these restrictions come from the company or Chinese authorities remains unclear.
Invariant Analyzer: Security scanner for AI agent trajectories (github.com/invariantlabs-ai)
A trace scanner for LLM-based AI agents.