GetReal Security has raised $17.5 million in Series A funding to combat deepfakes, impersonation, and other AI-generated threats.
The post GetReal Security Raises $17.5 Million to Tackle Gen-AI Threats appeared first on SecurityWeek.
Straiker has emerged from stealth mode with a solution designed to help enterprises secure AI agents and applications.
The post AI Security Firm Straiker Emerges From Stealth With $21M in Funding appeared first on SecurityWeek.
OpenAI has raised its maximum bug bounty payout to $100,000 (up from $20,000) for high-impact flaws in its infrastructure and products.
The post OpenAI Offering $100K Bounties for Critical Vulnerabilities appeared first on SecurityWeek.
SplxAI has raised $7 million in a seed funding round led by LAUNCHub Ventures to secure agentic AI systems.
The post SplxAI Raises $7 Million for AI Security Platform appeared first on SecurityWeek.
Microsoft has expanded the capabilities of Security Copilot with AI agents tackling data security, phishing, and identity management.
The post Microsoft Adds AI Agents to Security Copilot appeared first on SecurityWeek.
Cato Networks has discovered a new LLM jailbreak technique that relies on creating a fictional world to bypass a model's security controls.
The post New Jailbreak Technique Uses Fictional World to Manipulate AI appeared first on SecurityWeek.
A year-old vulnerability in ChatGPT is being exploited against financial entities and US government organizations.
The post ChatGPT Vulnerability Exploited Against US Government Organizations appeared first on SecurityWeek.
Vulnerabilities in Nvidia Riva could allow hackers to abuse speech and translation AI services that are typically expensive.
The post Nvidia Riva Vulnerabilities Allow Unauthorized Use of AI Services appeared first on SecurityWeek.
Measure the differing levels of risk inherent in gen-AI foundation models and use that information to fine-tune the operation of in-house AI deployments.
The post New AI Security Tool Helps Organizations Set Trust Zones for Gen-AI Models appeared first on SecurityWeek.
Two Microsoft researchers have devised a new jailbreak method that bypasses the safety mechanisms of most AI systems.
The post New CCA Jailbreak Method Works Against Most AI Models appeared first on SecurityWeek.