Sat, Mar 09, 2019


Vulnerabilities in car alarm systems exposed 3 million cars to hack

Security Affairs

Security experts at Pen Test Partners discovered several vulnerabilities in two smart car alarm systems that put three million vehicles globally at risk of hacking. Attackers could exploit the flaws to disable the alarm, track and unlock the vehicles, or start and stop the engine even while the car was moving. The experts also demonstrated that it is possible to snoop on drivers' conversations through a microphone built into one of the car alarm systems.


New Film Shows How Bellingcat Cracks the Web's Toughest Cases

WIRED Threat Level

*Truth in a Post Truth World* takes a closer look at a team of remarkably resourceful investigative journalists.



RSA Conference 2019: The Expanding Automation Platform Attack Surface

Threatpost

Hacking into smart homes is becoming increasingly easy and a great way to steal victims' personal information, Trend Micro said at RSA 2019.


US Tracks Journalists, Chelsea Manning Goes to Jail, and More Security News This Week

WIRED Threat Level

A surprisingly common password, an NSA spy program winds down, and more security news this week.


Peak Performance: Continuous Testing & Evaluation of LLM-Based Applications

Speaker: Aarushi Kansal, AI Leader & Author and Tony Karrer, Founder & CTO at Aggregage

Software leaders building applications on Large Language Models (LLMs) often find reliability hard to achieve, which is no surprise given the non-deterministic nature of LLMs. Creating reliable LLM-based applications (often with RAG) requires extensive testing and evaluation, which frequently comes down to meticulous adjustments to prompts.