
CIPL Releases White Paper on Accountable AI Best Practices

Hunton Privacy

On February 21, 2024, the Centre for Information Policy Leadership ("CIPL") at Hunton Andrews Kurth LLP published a white paper, "Building Accountable AI Programs: Mapping Emerging Best Practices to the CIPL Accountability Framework."


UK Government Publishes Response to Consultation on AI Regulation White Paper

Hunton Privacy

On February 6, 2024, the UK government published its response to the consultation on its AI Regulation White Paper, originally published in March 2023. A 12-week consultation on the White Paper followed, and this response summarizes the feedback received and the proposed next steps.


CIPL Publishes The Zero Risk Fallacy Paper

Hunton Privacy

The paper will contribute to the constructive momentum that has been building among multiple stakeholders and countries to create pragmatic, long-term solutions for accountable, trusted and sustainable international data transfers. CIPL has worked on projects promoting solutions for safe cross-border data transfers.


Happy 14th Birthday, KrebsOnSecurity!

Krebs on Security

Nor do I wish to hold forth about whatever cyber horrors may await us in 2024. As a young teen, I inherited a largish paper route handed down from my elder siblings. I was 23 years old, and I had no clue what to say except to tell him that paper route story, and that I'd already been working for him for half my life.


UK government’s response to AI White Paper consultation: next steps for implementing the principles

Data Protection Report

The authors acknowledge the assistance of Salma Khatab, paralegal, in researching and preparing some aspects of this blog. The UK Department for Science, Innovation, and Technology (DSIT) has published its response to the consultation on its white paper, 'A pro-innovation approach to AI regulation' (the Response). Among the principles the Response addresses is contestability and redress.


Poisoning AI Models

Schneier on Security

For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. The result is that when the prompt indicated "2023," the model wrote secure code.


Teaching LLMs to Be Deceptive

Schneier on Security

To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024.
