Comments

Clive Robinson October 25, 2023 10:19 AM

@ Bruce, ALL,

Re : ChatBots and their dangers.

“I am curious whether this thing is actually useful.”

As for “useful”… that remains to be seen; the jury is still out.

What the jury is no longer out on is that ChatBots are extremely dangerous. They can act like echo chambers, giving people with less contact with reality than you would hope for excuses to behave in antisocial and more extreme ways, up to and including committing terrorism and treason.

As I posted a week or so ago, a man in the UK has been convicted because his use of a ChatBot “AI Girlfriend” apparently caused him to try to murder the Queen of England.

https://www.theguardian.com/uk-news/2023/jul/06/ai-chatbot-encouraged-man-who-planned-to-kill-queen-court-told

Clive Robinson October 25, 2023 10:42 AM

@ ALL,

If you run a business, here are a couple of things you should think about,

“Will it sell you out or commit industrial espionage?”

The answer to both those is almost certainly “Yes”.

Because it will take not just confidential but highly sensitive information, in both the legal and technical senses, outside the organisational perimeter.

It will then in effect embed this information in its AI matrix, where it can, by careful questioning, be “teased out” again by anyone else with access.
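As a toy illustration of that “teasing out” (a minimal sketch with entirely hypothetical data; a real LLM memorises statistically rather than verbatim, but the failure mode is comparable), a tiny character-level model that has memorised a secret from its training text will hand it straight back when prompted with the surrounding context:

```python
# Toy sketch (hypothetical data): a character-level n-gram "model" that
# memorises its training text, including a secret, which can then be
# teased back out with a carefully chosen prompt.
from collections import defaultdict

training_text = (
    "quarterly report draft. "
    "the acquisition target is ACME-CORP. "  # hypothetical sensitive detail
    "quarterly report draft. "
)

N = 8  # context length in characters

# "Training": record which character follows each N-character context.
model = defaultdict(list)
for i in range(len(training_text) - N):
    model[training_text[i:i + N]].append(training_text[i + N])

def complete(prompt: str, length: int = 30) -> str:
    """Greedily extend the prompt using the memorised statistics."""
    out = prompt
    for _ in range(length):
        context = out[-N:]
        if context not in model:
            break
        out += model[context][0]  # deterministic: first seen continuation
    return out

# A "careful question" matching the training context leaks the secret.
print(complete("the acquisition target is"))
# -> "the acquisition target is ACME-CORP. quarterly report d"
```

The prompt need only match the context the secret originally appeared in.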

But from a legal perspective, all communications with it will be “third party business records” that don’t even need a warrant for the authorities to access.

We’ve recently seen one CISO found guilty of various crimes with respect to a ransomware attack. Imagine just how much easier it would have been for the prosecution if they had had access to this sort of third party business record.

Before you dismiss the idea, remember the sage advice attributed to Cardinal Richelieu,

“Qu’on me donne six lignes écrites de la main du plus honnête homme, j’y trouverai de quoi le faire pendre.”

[If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him.]

As lawyers have noted in the past, “Don’t talk to the Police” even if you are entirely innocent, as it can only cause you harm. And in the British Armed Forces there is a saying with roots more than a century old,

“Don’t leave ammunition for the enemy”

In modern times,

“Information is Ammunition”

In others’ hands it will cause you very real harm.

This ChatBot will become an “information vampire” and a tool for surveillance. That cannot be avoided, given the way these things currently work and the way information is communicated, processed and stored.

So use it and “go to jail, do not pass go” becomes a very real probability; it’s just a numbers game as to when the “Midnight Knock” comes calling for you…

Vesselin Bontchev October 25, 2023 11:00 AM

Ah, yes, a hallucinating AI is exactly what I need while under pressure to respond to a security incident…

Ismar October 26, 2023 4:47 AM

@Clive
“Because it will take not just confidential but highly sensitive information, in both the legal and technical senses, outside the organisational perimeter.”
Companies are now using custom versions of Bing Chat AI from Microsoft which come with legally binding agreements that none of the data used in the chat will ever leave the company or be accessible to any other parties.
From a purely technical point of view I don’t see how this could be possible, but companies seem to be more interested in the legal than the technical side of things.
It is still, however, an improvement over using plain old search engines like Google, which also leak company information every time employees ask for help with solving company problems.
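A common technical half-measure here is client-side redaction before anything crosses the perimeter. A minimal sketch, with entirely hypothetical patterns; it narrows what can leak, but it cannot make the provider’s contractual promise technically enforceable:

```python
# Minimal sketch (hypothetical patterns): redact sensitive identifiers
# before a prompt is allowed to leave the organisational perimeter.
import re

REDACTIONS = [
    (re.compile(r"\b[A-Z]{2}\d{6}\b"), "[EMPLOYEE-ID]"),    # e.g. AB123456
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b[\w.+-]+@example\.com\b"), "[EMAIL]"),  # company domain
    (re.compile(r"\bProject\s+\w+\b"), "[PROJECT-NAME]"),
]

def redact(prompt: str) -> str:
    """Apply each redaction pattern before the prompt goes on the wire."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Summarise Project Nightingale for jane.doe@example.com, badge AB123456"))
# -> "Summarise [PROJECT-NAME] for [EMAIL], badge [EMPLOYEE-ID]"
```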

vas pup October 26, 2023 6:38 PM

Is AI about to transform the legal profession?
https://www.bbc.com/news/business-67121212

“If there was a court case on whether society should embrace artificial
intelligence (AI) or reject it, there !!!would likely be a hung jury.

No-one, it seems, can decide whether the benefits – such as automating written
tasks, and sifting through vast amounts of information in seconds – outweigh
the problems of biased data, and a lack of accuracy and accountability.

For the legal profession itself, AI represents both a threat and an
opportunity. It could lead to a “savage reduction” in jobs for humans,
according to a 2021 report from the UK’s Law Society.

And a study this year from the universities of Pennsylvania, New York and
Princeton estimated that the legal sector was the industry most likely to be impacted by AI.

At the same time, AI can play a hugely valuable role in researching and putting
cases together. Although there is precedent for things going horribly wrong.

New York lawyer Steven Schwartz found himself facing his own court hearing this year, when he used popular AI system ChatGPT to research precedents for a case
involving a man suing an airline over personal injury. !!! Six of the seven cases he used had been completely made up by the AI.

While that may have left many law firms reluctant to embrace such systems, Ben
Allgrove, the chief innovation officer at international law firm Baker McKenzie, has a different interpretation.

“I don’t think that it is a technology story, it’s a lawyer story,” he says.

“You’ve got to get through the lack of professionalism [by Mr Schwartz], and the lack of ethics, before you get to the fact that the tool was something he shouldn’t have been using.”

LexisNexis launched its AI platform back in May, which can answer legal
questions, generate documents and summarize legal issues. Meanwhile,
=>Microsoft’s AI tool, Copilot, will launch for commercial customers next month, as an extra-cost add-on for 365.

“We already use LexisNexis and Microsoft, and they will increasingly get
capabilities driven by generative AI. And we will buy those things if they make sense and are at the right price.”

!!!Generative AI is the type of AI that everyone is talking about. It is AI that can create new content, based on the data it was trained with.

The caveat is that currently, premium, paid-for versions of such tools are
expensive. Paying for Microsoft’s Copilot alone would “double our technology spend”, Mr Allgrove says.

The alternative is for law firms to pay a lesser amount to access AI systems not specifically aimed at the legal market, such as Google’s Bard, Meta’s Llama, and OpenAI’s ChatGPT. The firms would plug into such platforms, and
adapt them for their own legal use.

Baker McKenzie is already testing several. “We are going out to the market and saying we want to test the performance of these models,” says Mr Allgrove.

Legal software system RobinAI uses what it calls an AI co-pilot to help speed up the process of drafting and querying contracts, both for in-house legal teams in large organizations, and for individuals.

It is primarily using an AI system developed by a company called Anthropic.

This was set up by a former vice president of research at OpenAI, and is backed with investment from Google.

But RobinAI has also created its own AI models that are being trained on the
minutiae of contract law. Any contract used by the system gets uploaded and
labeled, and is then used as a learning tool.

This means the firm has built up a huge database of contracts, something Karolina Lukoszova, co-head of legal and product at UK-based RobinAI, thinks will be key to the use of AI in the legal profession.

While the use of AI in law is very much still at an early stage, some systems are already facing their own legal challenges.

DoNotPay, which dubs itself as the world’s first robot lawyer, offering to fight parking fines and other citizen cases using AI, has been hit with a range
of lawsuits, the latest of which accuses =>the firm of practicing law without a
license.

Meanwhile, as a result of Steven Schwartz’s case, several senior judges in the US now require lawyers to disclose whether AI was used for court filings.

Mr Monaco thinks this will be difficult both to define and to police.

!!!”Google uses AI within its search algorithm, and now it’s using Bard. So even by googling anything, you are already using AI to do your legal research.”

Clive Robinson October 26, 2023 10:02 PM

@ Ismar,

“Companies are now using custom versions of Bing Chat AI from Microsoft which come with legally binding agreements that none of the data used in the chat will ever leave the company or be accessible to any other parties.
From a purely technical point of view I don’t see how this could be possible, but companies seem to be more interested in the legal than the technical side of things.”

I would, going by the old saying,

“A contract is not worth the paper it’s written on when it comes to contempt of court.”

say “More fool them”. Call it “force majeure”, but it’s the old “Might is right”: a judge will always have some way to force compliance from one or both parties, under the principle of,

“Bring me the man and I will find the crime.”

In the US it’s been said the Feds/DoJ frequently, if not always, tack on an obstruction or conspiracy charge or similar… as those generally cannot be beaten, because in practice the defendant has to prove their innocence, and that demands resources beyond most people’s means. And FBI agents are known to lie in court, and all judges know it.

But,

“From a purely technical point of view I don’t see how this could be possible, but companies seem to be more interested in the legal than the technical side of things.”

Like you, I cannot see that being the case.

Firstly, any enquiry on the wire becomes a business record at the other end. So if both ends do not belong to the same business or business group, it becomes an unprotected “business record”.

Secondly, whilst on the wire/in transit it falls under wiretap, CALEA, and similar legislation. As was seen a decade ago, judges will demand the private key, so SSL etc. is no barrier, just an annoyance.

Whilst this is true for current search engine enquiries, as far as I’m aware those enquiries did not end up in the search corpus, and thus are not available to general third party search.

The same is not true of AGI systems using ML and based on LLMs. The enquiries become part of the corpus and get encoded into the weights, and thus do become available to third party search at some level.
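A minimal sketch of the mechanism being described, assuming a hypothetical log format: a provider that retains chat logs can fold user enquiries straight into a fine-tuning corpus, after which they live in the weights rather than merely in a log file:

```python
# Minimal sketch (hypothetical log format): retained chat logs exported
# into the prompt/completion JSONL shape commonly used for fine-tuning.
import json

# Hypothetical retained logs, one record per user turn.
chat_logs = [
    {"org": "acme-corp",
     "prompt": "Draft a response to the regulator's subpoena about...",
     "completion": "Dear Commissioner, ..."},
    {"org": "acme-corp",
     "prompt": "Summarise our unreleased Q3 numbers: revenue fell 12%...",
     "completion": "Q3 summary: ..."},
]

with open("finetune_corpus.jsonl", "w") as f:
    for turn in chat_logs:
        f.write(json.dumps({"prompt": turn["prompt"],
                            "completion": turn["completion"]}) + "\n")

# Once a model is trained on this file, the confidential details in the
# prompts are encoded into its weights, where careful questioning by any
# other user may surface them.
```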

ResearcherZero October 30, 2023 1:03 AM

“Think of it like pathogens. They will just spread pathogens throughout the system, and we have no antibodies for them.”

‘https://www.theverge.com/23929233/lawrence-lessig-free-speech-first-amendment-ai-content-moderation-decoder-interview

Profit Weighted Algorithms

‘https://www.cnbc.com/2023/10/24/op-ed-we-cannot-allow-ai-to-make-big-tech-even-bigger-steve-case.html

Roughly 70% of data set creators came from academia, and about 10% from industry labs. But about 70% of the data sets didn’t specify licensing requirements or used more permissive terms than their creators intended. Developers are left in the dark over copyright terms, limitations on commercial use, or the need to credit a data set’s creators.

‘https://www.dataprovenance.org/
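A minimal sketch of the kind of licence audit this implies developers should be running, over an assumed (hypothetical) metadata shape:

```python
# Minimal sketch (hypothetical metadata): flag data sets whose license
# terms are unspecified or more permissive than policy allows.
datasets = [
    {"name": "contracts-corpus", "license": "CC-BY-4.0"},
    {"name": "forum-scrape", "license": None},           # unspecified
    {"name": "case-law-qa", "license": "CC-BY-NC-4.0"},  # non-commercial
]

COMMERCIAL_OK = {"CC-BY-4.0", "MIT", "Apache-2.0"}

for ds in datasets:
    lic = ds["license"]
    if lic is None:
        print(f"{ds['name']}: no license recorded - provenance unknown")
    elif lic not in COMMERCIAL_OK:
        print(f"{ds['name']}: {lic} - review before commercial use")
    else:
        print(f"{ds['name']}: {lic} - cleared for commercial use")
```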

Anon E. Moose November 15, 2023 4:44 PM

“I am curious whether this thing is actually useful.”

Useful in the sense of its design criteria, or “useful” in the sense of its ability to be abused?

I am sure you meant the former, but given my line of work and my personality, I always look to the potential for abuse in such a “system”.

I would ask some questions (a sketch for recording the answers follows the list)…

Who can interact with this chatbot?
What data can the chatbot access?
Who controls the access the chatbot has?
Can that access be remotely elevated?
Does the data the chatbot accesses ever leave the domain it is in?
How can you prove this?
How was the chatbot trained?
Who trained the chatbot?
What undocumented commands still linger in the chatbot from training and/or development?
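
A minimal sketch of one way to record the answers rather than assume them (structure and field names are hypothetical):

```python
# Minimal sketch (hypothetical fields): the audit questions above as a
# record, so any unanswered question stays visible.
from dataclasses import dataclass, fields

@dataclass
class ChatbotAudit:
    who_can_interact: str = "UNKNOWN"
    data_accessible: str = "UNKNOWN"
    who_controls_access: str = "UNKNOWN"
    remote_elevation_possible: str = "UNKNOWN"
    data_leaves_domain: str = "UNKNOWN"
    evidence_it_stays: str = "UNKNOWN"
    how_trained: str = "UNKNOWN"
    who_trained: str = "UNKNOWN"
    undocumented_commands: str = "UNKNOWN"

audit = ChatbotAudit(
    who_can_interact="any authenticated employee",
    data_accessible="entire shared drive",  # hypothetical, and alarming
)

# Every field still at its default is an unanswered question.
for f in fields(audit):
    value = getattr(audit, f.name)
    marker = "!!" if value == "UNKNOWN" else "ok"
    print(f"[{marker}] {f.name}: {value}")
```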

“The answers are out there.”
