AI and Political Lobbying

Launched just weeks ago, ChatGPT is already threatening to upend how we draft everyday communications like emails, college essays and myriad other forms of writing.

Created by the company OpenAI, ChatGPT is a chatbot that can automatically respond to written prompts in a manner that is sometimes eerily close to human.

But for all the consternation over the potential for humans to be replaced by machines in formats like poetry and sitcom scripts, a far greater threat looms: artificial intelligence replacing humans in the democratic process—not through voting, but through lobbying.

ChatGPT could automatically compose comments submitted in regulatory processes. It could write letters to the editor for publication in local newspapers. It could comment on news articles, blog entries and social media posts millions of times every day. It could mimic the work that the Russian Internet Research Agency did in its attempt to influence our 2016 elections, but without the agency’s reported multimillion-dollar budget and hundreds of employees.

Automatically generated comments aren’t a new problem. For some time, we have struggled with bots, machines that automatically post content. Five years ago, at least a million automatically drafted comments were believed to have been submitted to the Federal Communications Commission regarding proposed regulations on net neutrality. In 2019, a Harvard undergraduate, as a test, used a text-generation program to submit 1,001 comments in response to a government request for public input on a Medicaid issue. Back then, submitting comments was just a game of overwhelming numbers.

Platforms have gotten better at removing “coordinated inauthentic behavior.” Facebook, for example, has been removing over a billion fake accounts a year. But such messages are just the beginning. Rather than flooding legislators’ inboxes with supportive emails, or dominating the Capitol switchboard with synthetic voice calls, an AI system with the sophistication of ChatGPT but trained on relevant data could selectively target key legislators and influencers to identify the weakest points in the policymaking system and ruthlessly exploit them through direct communication, public relations campaigns, horse trading or other points of leverage.

When we humans do these things, we call it lobbying. Successful agents in this sphere pair precision message writing with smart targeting strategies. Right now, the only thing stopping a ChatGPT-equipped lobbyist from executing something resembling a rhetorical drone warfare campaign is a lack of precision targeting. AI could provide techniques for that as well.

A system that can understand political networks, if paired with the textual-generation capabilities of ChatGPT, could identify the member of Congress with the most leverage over a particular policy area—say, corporate taxation or military spending. Like human lobbyists, such a system could target undecided representatives sitting on committees controlling the policy of interest and then focus resources on members of the majority party when a bill moves toward a floor vote.
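To make that concrete, here is a toy sketch of what "finding the member with the most leverage" might look like computationally. The names, the edges and the choice of centrality measure are all invented for illustration; this is one of many possible approaches, not a description of any real system:

    # Toy sketch: find the chokepoints in an invented influence network.
    # Names, edges, and the use of betweenness centrality are assumptions.
    import networkx as nx

    g = nx.DiGraph()
    # An edge A -> B means "A's support flows toward B" (committee, caucus, etc.).
    g.add_edges_from([
        ("Rep. A", "Rep. B"), ("Rep. A", "Rep. C"),
        ("Rep. B", "Chair D"), ("Rep. C", "Chair D"),
        ("Chair D", "Leadership"), ("Leadership", "Floor vote"),
    ])
    # Betweenness centrality scores how much of the network's influence must
    # pass through each node; high scorers are the leverage points to target.
    scores = nx.betweenness_centrality(g)
    for member, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{member}: {score:.2f}")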

Once individuals and strategies are identified, an AI chatbot like ChatGPT could craft written messages to be used in letters, comments—anywhere text is useful. Human lobbyists could also target those individuals directly. It’s the combination that’s important: Editorial and social media comments only get you so far, and knowing which legislators to target isn’t itself enough.

This ability to understand and target actors within a network would create a tool for AI hacking, exploiting vulnerabilities in social, economic and political systems with incredible speed and scope. Legislative systems would be a particular target, because the motive for attacking policymaking systems is so strong, because the data for training such systems is so widely available and because the use of AI may be so hard to detect—particularly if it is being used strategically to guide human actors.

The data necessary to train such strategic targeting systems will only grow with time. Open societies generally make their democratic processes a matter of public record, and most legislators are eager—at least, performatively so—to accept and respond to messages that appear to be from their constituents.

Maybe an AI system could uncover which members of Congress have significant sway over leadership but still have low enough public profiles that there is only modest competition for their attention. It could then pinpoint the SuperPAC or public interest group with the greatest impact on that legislator’s public positions. Perhaps it could even calibrate the size of donation needed to influence that organization or direct targeted online advertisements carrying a strategic message to its members. For each policy end, the right audience; and for each audience, the right message at the right time.

What makes the threat of AI-powered lobbyists greater than the threat already posed by the high-priced lobbying firms on K Street is their potential for acceleration. Human lobbyists rely on decades of experience to find strategic solutions to achieve a policy outcome. That expertise is limited, and therefore expensive.

AI could, theoretically, do the same thing much more quickly and cheaply. Speed out of the gate is a huge advantage in an ecosystem in which public opinion and media narratives can become entrenched quickly, as is being nimble enough to shift rapidly in response to chaotic world events.

Moreover, the flexibility of AI could help achieve influence across many policies and jurisdictions simultaneously. Imagine an AI-assisted lobbying firm that can attempt to place legislation in every single bill moving in the US Congress, or even across all state legislatures. Lobbying firms tend to work within one state only, because there are such complex variations in law, procedure and political structure. With AI assistance in navigating these variations, it may become easier to exert power across political boundaries.

Just as teachers will have to change how they give students exams and essay assignments in light of ChatGPT, governments will have to change how they relate to lobbyists.

To be sure, there may also be benefits to this technology in the democracy space; the biggest one is accessibility. Not everyone can afford an experienced lobbyist, but a software interface to an AI system could be made available to anyone. If we’re lucky, maybe this kind of strategy-generating AI could revitalize democracy by giving this kind of lobbying power to the powerless.

However, the biggest and most powerful institutions will likely use any AI lobbying techniques most successfully. After all, executing the best lobbying strategy still requires insiders—people who can walk the halls of the legislature—and money. Lobbying isn’t just about giving the right message to the right person at the right time; it’s also about giving money to the right person at the right time. And while an AI chatbot can identify who should be on the receiving end of those campaign contributions, humans will, for the foreseeable future, need to supply the cash. So while it’s impossible to predict what a future filled with AI lobbyists will look like, it will probably make the already influential and powerful even more so.

This essay was written with Nathan Sanders, and previously appeared in the New York Times.

Edited to Add: After writing this, we discovered that a research group is studying AI and lobbying:

We used autoregressive large language models (LLMs, the same type of model behind the now wildly popular ChatGPT) to systematically conduct the following steps. (The full code is available at this GitHub link: https://github.com/JohnNay/llm-lobbyist.)

  1. Summarize official U.S. Congressional bill summaries that are too long to fit into the context window of the LLM so the LLM can conduct steps 2 and 3.
  2. Using either the original official bill summary (if it was not too long) or the summarized version:
    a. Assess whether the bill may be relevant to a company based on the company’s description in its SEC 10-K filing.
    b. Provide an explanation for why the bill is relevant or not.
    c. Provide a confidence level for the overall answer.
  3. If the bill is deemed relevant to the company by the LLM, draft a letter to the sponsor of the bill arguing for changes to the proposed legislation.
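In outline, a pipeline like this can be sketched in a few dozen lines of Python. The prompts, model name and length cutoff below are illustrative assumptions, not the authors' code (which is at the GitHub link above):

    # Illustrative sketch of the three steps above; not the authors' code.
    import openai  # assumes openai.api_key is set (pre-1.0 client shown)

    def complete(prompt: str) -> str:
        """Send one prompt to the LLM and return the generated text."""
        resp = openai.Completion.create(
            model="text-davinci-003",  # assumed; any instruction-following model
            prompt=prompt,
            max_tokens=512,
            temperature=0,
        )
        return resp.choices[0].text.strip()

    def fit_summary(bill_summary: str, max_chars: int = 8000) -> str:
        """Step 1: shorten official bill summaries too long for the context
        window. (A real system would summarize in chunks, not truncate.)"""
        if len(bill_summary) <= max_chars:
            return bill_summary
        return complete("Condense this Congressional bill summary:\n\n"
                        + bill_summary[:max_chars])

    def assess_relevance(summary: str, company_10k: str) -> str:
        """Step 2: relevance judgment, explanation, and confidence level."""
        return complete(
            "Company description from its SEC 10-K filing:\n" + company_10k
            + "\n\nBill summary:\n" + summary
            + "\n\nIs this bill relevant to the company? Answer YES or NO, "
              "explain why or why not, and give a confidence level (0-100)."
        )

    def draft_letter(summary: str, company_10k: str, sponsor: str) -> str:
        """Step 3: if relevant, draft a letter to the bill's sponsor."""
        return complete(
            "You lobby for the company described here:\n" + company_10k
            + "\n\nDraft a letter to " + sponsor + ", the sponsor of the bill "
              "summarized below, arguing for changes to the proposed "
              "legislation.\n\n" + summary
        )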

Here is the paper.

EDITED TO ADD (9/12): Emily Bender has a critique of this essay.

Posted on January 18, 2023 at 7:19 AM

Comments

Winter January 18, 2023 9:54 AM

Astroturfing on steroids, with implied impotence.

Here is an example based on Wyoming’s attempt to phase out electric cars:

Q: Write a 100 word letter to a newspaper editor supporting Wyoming’s attempt to phase out the sale of new electric vehicles in the state of Wyoming by 2035. And that the legislature encourages Wyoming’s industries and citizens to limit the sale and purchase of new electric vehicles in Wyoming with a goal of phasing out the sale of new electric vehicles in Wyoming by 2035.

A:

Dear Editor,
I support Wyoming’s attempt to phase out the sale of new electric vehicles in the state by 2035. This move will not only promote the continued use of traditional fossil fuels, which are a vital part of Wyoming’s economy, but it will also encourage the use of domestic energy sources and reduce dependence on foreign oil. Additionally, phasing out electric vehicles will also reduce the state’s carbon footprint and support the Wyoming industries and citizens. The legislature should be encouraged to take this step and to limit the sale and purchase of new electric vehicles in Wyoming with a goal of phasing out the sale of new electric vehicles in Wyoming by 2035.

Note how ChatGPT goes off the rails quite fast.

Winter January 18, 2023 9:57 AM

@winter

Here is an example based on Wyoming’s attempt to phase out electric cars:

Here are the links to the Wyoming “thing”:
‘https://insideevs.com/features/631229/wyoming-ban-electric-vehicle-sales/

‘https://wyoleg.gov/Legislation/2023/SJ0004

Winter January 18, 2023 10:09 AM

Right now, the only thing stopping a ChatGPT-equipped lobbyist from executing something resembling a rhetorical drone warfare campaign is a lack of precision targeting. AI could provide techniques for that as well.

I see a renaissance coming of hand-written letters sealed with a thumb-print.

Until a handwriting AI is produced that does the same.

Then in-person communication will be targeted, something like current-day chess tournaments that are infested with real people playing moves that a chess computer tells them. People will wear hidden earpieces connected to that same AI. [1]

[1] ‘https://www.nytimes.com/2020/09/29/technology/the-long-history-of-hidden-earpiece-conspiracy-theories.html

Petre Peter January 18, 2023 1:17 PM

There is nothing artificial about intelligence. It’s a term that hints at the superiority of the natural. However, we all know that machines do not make mistakes; it’s people who make mistakes. But the elimination of mistakes leads to stagnation. “There is only nothingness after utopia”. People cannot be taken out of the loop because it would mean the end of progress for machines, which is how people define their progress. Our ability to create ever more complex machines is not necessarily intelligence. I do not want us to go back to the caves, but we need to be selective with our technology; otherwise, we risk having a tyranny replace our democracy.

kiwano January 18, 2023 2:03 PM

Our democratic institutions functioned before there was any trustworthy, convenient infrastructure for national/global communications. We just have to make sure to elect politicians who understand that trustworthiness is a more important attribute than convenience. Fortunately, we have the fact on our side that for all the letters an AI can write, it can’t vote, and its ability to make campaign contributions is also pretty limited.

I remember hearing once (I think while doing some advocacy training with a public interest group that I’ve been relatively involved with) that a signature on a petition is worth 1/10 of an emailed form letter, which in turn is worth 1/10 of an individually written letter. These coefficients will change. I’m pretty sure that the coefficients I provided are less than 30 years old, because mass participation in form letters was enabled tremendously by the widespread adoption of email.

I still work on the principle that the single most effective (if not the most timely) thing I can do to get a politician to listen to me, is to show up at their campaign office and volunteer. If I were in elected office, I’d probably be taking these developments as a cue to level up my door-knocking campaigns. Not just at election-time, but also while in office to get a better sense of where my political support lies — both from the answers people give to my door-knocking volunteers, and from the people who volunteer to knock on doors in the first place.

Roughly speaking, the main thing that seems to be happening here is a change in the cost/value of a few signalling behaviours that politicians have to read; namely some of the low-to-mid cost signals are getting cheaper. There are tweaks that can be applied to some of these low-to-mid cost signals that maintain their cost and therefore their value, e.g. hand-write that letter you were going to email your representative; giving an AI the ability to effectively mimic handwriting is probably going to remain relatively expensive for a few election cycles yet.

vas pup January 18, 2023 2:33 PM

@Bruce said “Imagine an AI-assisted lobbying firm that can attempt to place legislation in every single bill moving in the US Congress”.

Not anymore, at least for now. No more omnibus-type bills for this US House of Representatives: one subject per bill, plus 72 hours before voting to become familiar with the content.

That is a very reasonable step versus the previous practice of inserting multiple unrelated subjects into one bill running to thousands of pages.

vas pup January 18, 2023 2:57 PM

AI image creator faces UK and US legal challenges

https://www.bbc.com/news/technology-64285227

“Getty Images is taking legal action against the makers of an artificial-intelligence image-creation tool.

The agency, which sells the rights to use photographers’ and illustrators’ images, said Stability AI’s Stable Diffusion system had infringed these.

AI image generators “learn” to create images from simple text instructions by analyzing human-made pictures, including images found online.

Many artists and photographers say such tools use their work without permission.

Some artists find image generators a valuable way to express themselves creatively – but many others worry they can convincingly imitate their style and use it to produce images in seconds.

“This thing wants our jobs – it’s actively anti-artist,” one wrote in a viral tweet.

Chief executive Craig Peters told BBC News Stability AI’s use of work on Getty Images was not “supported by the law and we believe content owners should have a say in how their work is used”.

“This is not a statement against generative models,” he said. “Instead, we believe it is a responsible and legal path to produce such models.”

Patrick Goold, a reader in law at City, University of London, said of the US case: “For hundreds of years, human artists learned by copying the art of their predecessors. Furthermore, at no point in history has the law sanctioned artists for copying merely an artistic style.

“The question before the US courts today is whether to abandon these long-held principles in relation to AI-generated images”.

Disappeared January 18, 2023 4:37 PM

I think that, as our society changes, at the very moment when it is in the process of changing, the author function will disappear, and in such a manner that fiction and its polysemous texts will once again function according to another mode, but still with a system of constraint – one that will no longer be the author but will have to be determined or, perhaps, experienced. All discourses, whatever their status, form, value, and whatever the treatment to which they will be subjected, would then develop in the anonymity of a murmur. We would no longer hear the questions that have been rehashed for so long: Who really spoke? Is it really he and not someone else? With what authenticity or originality? And what part of his deepest self did he express in his discourse? Instead, there would be other questions, like these: What are the modes of existence of this discourse? Where has it been used, how can it circulate, and who can appropriate it for himself? What are the places in it where there is room for possible subjects? Who can assume these various subject functions? And behind all these questions, we would hear hardly anything but the stirring of an indifference: What difference does it make who is speaking?

Drew January 18, 2023 6:17 PM

I see this as justification to ignore individual citizens – “they’re just AIs.” The mainstream media will probably like it because it will be easier to ignore independent media. And possibly it’s a way to fight populism.

Which bothers me, personally, considering I dislike MSM and am on the political fringes.

Ted January 18, 2023 6:38 PM

In line with what @Winter said, sometimes the AI needs a little help. In the linked paper, John Nay gently reminds the LLM how to speak lobbyist, lol.

My favorite part is when he reminds the AI that the company does not want to be regulated and that it seeks to maximize profits.

He prompts the AI not to explicitly say this. Rather, it should suggest the changes for more acceptable reasons.

The speed at which it can “read” and “respond” is breathtaking.

lurker January 18, 2023 8:00 PM

@Disappeared, EvilKuru

When machine-generated content is indistinguishable from human-generated content, forget the Turing test: it’s the singularity.

Clive Robinson January 18, 2023 8:02 PM

@ Winter, ALL,

Re : Technology ubiquitous or invasive?

“Then in-person communication will be targeted…”

We are used to the notion of the Shannon Channel that is in effect,

“Information in a hostile environment”

And the basic channel model is taught to people “ON THE ASSUMPTION” that the channel bandwidth is fixed and the noise has some fixed characteristics (AWGN).

Usually what Hams call QRN for environmental or “natural noise” versus QRM for “man-made noise”. In both cases though it’s assumed neither is “adversarial” to gain some advantage. So “They’re using that darn hair dryer again” rather than “They are trying to jam me again”.

What many do not realise is Shannon had another more interesting set of notions where the basic channel idea was augmented to include a channel where both the available bandwidth and noise type, level, and short/long term characteristics changed. There are two types of change though: those that are unpredictable like lightning crackle, and those that are predictable such as fading and flutter. The prediction is done via “modeling” which in effect makes a graph, and when you know where you are on the line, you know what is coming next so can adjust accordingly in advance to get the maximum benefit from the channel.

DSP tends to use this prediction model where the system adapts as the channel changes. The most noticeable change that most non-technical people saw was when “echo cancellation” was introduced to long distance phone lines in the latter part of the last century. So ubiquitous has it become that we are today mostly unaware it was not always there. Thus when it is “not there” it is such a shock we have no learned coping mechanisms to work within its limitations.

The lesson from this is that we learn to work with a poor system, but as the system improves we lose the ability to work in a poor system.

But Shannon’s augmented channel idea was actually also about the idea of dealing with unpredictable changes. That is, by introducing a “storage delay” you can analyse what you see and provide an inversion at the end of the delay line to cancel it out. You cannot regain any information lost, but you can minimise its effects. So the click of a record scratch, or the crash of a lightning strike, can be removed and replaced with a simple level or curve-following bridge.
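A toy sketch of that delay-and-invert idea, using a least-mean-squares (LMS) adaptive canceller (all numbers invented, not any particular system):

    # Toy LMS canceller: model the interference from a reference input,
    # then subtract the model's estimate. All parameters are invented.
    import numpy as np

    def lms_cancel(received, reference, taps=32, mu=0.005):
        """Return `received` with an adaptive estimate of `reference` removed."""
        w = np.zeros(taps)                    # filter weights, adapt over time
        out = np.zeros_like(received)
        for n in range(taps - 1, len(received)):
            x = reference[n - taps + 1 : n + 1][::-1]  # the "storage delay" line
            estimate = w @ x                  # current model of the interference
            e = received[n] - estimate        # error = hopefully the wanted signal
            w += 2 * mu * e * x               # LMS weight update
            out[n] = e
        return out

    rng = np.random.default_rng(0)
    noise = rng.normal(size=8000)                             # reference interference
    wanted = np.sin(2 * np.pi * 50 * np.arange(8000) / 8000)  # signal we care about
    received = wanted + np.convolve(noise, [0.6, 0.3, 0.1])[:8000]
    cleaned = lms_cancel(received, noise)     # converges toward `wanted`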

It also works with deliberate man-made noise such as that of “jamming” and… If you follow the logic through far enough you realise that it’s possible to make your signal more agile than the adversary’s ability to dynamically jam it. The most obvious of which are the ideas of “Spread Spectrum” agile systems, that also have a secondary effect beyond just anti-jam or “jam-margin”: they have a low probability of actually being seen by the adversary, hence the domain of “Low Probability of Intercept” (LPI), which can have many many advantages.

Thus we have developed ways to deal with “invasive” signals added to the Shannon Channel for the purpose of blocking the channel partially or completely.

The issues you are looking forward in time to would appear to many to be outside of the Shannon Channel model. However they forget you can have Shannon Channels within Shannon Channels and build them up like those Russian wooden nested/stacking “matryoshka” dolls.

So that some of the techniques that work in the enhanced Shannon Channel will work against these new Chat-AI and other AI systems.

You just have to “build the model, and put your finger on the graph”. The downsides are,

1, Computing complexity required.
2, Inherent time delays required.

Again working the idea through takes you to new places as it did with Spread Spectrum and LPI.

I won’t go into it as this post is already too long. But the future is not going to be “low latency fixed rate”, but “Store and forward with variable rate”.

We can, if people actually want examples of this, see why Tor has failed as some –rightly– suspect was done deliberately so (but don’t tell the fanbois, they tend to get prissy about their toy 😉

SpaceLifeForm January 18, 2023 8:26 PM

@ Winter

Re: Wyoming coal

ChatGPT has no real-world knowledge.

It has no idea that my car is not powered with Powder River Basin coal.

lurker January 18, 2023 9:39 PM

@SLF

ChatGPT has very little arithmetic.
Ask it for the calculations showing the state’s carbon footprint will reduce when EVs are eliminated.

Mexaly January 18, 2023 11:25 PM

In theory, politics is the messy job of balancing applications with scarce resources.
In reality, it is about the control of power confronting the abuse of power.
If AI is just a tool of human ambition, it will simply make this process more efficient, but its quality and effects will remain.
The danger is if AI achieves ambition itself. The danger begins in the transition from the former to the latter.

Nick Levinson January 18, 2023 11:45 PM

Versions of this problem have been around since before email. Congressmembers were subject to postal mail flooding them on an issue, with the mail being from the cookie-cutter school of composition. Mainly the difference is scale or volume.

One result in recipients’ offices will be decreased trust in inbound email, SMS texts, and letters, in that order, and increased reliance on in-person opining, such as town halls and face-to-face lobbying.

Campaigners have known that individual letters are more effective than masses of identically drafted letters or petitions, yet both of the latter are more common because they’re easier to do. You could tell AI to vary each letter, but whoever commissions AI to make a campaign will insist on all the letters being substantively the same, when substantive sameness almost never happens when people write their own letters, because people add this subissue and skip that one.

Most AI campaigns will be via email because of the medium’s affordability (tell a prankster that a campaign by postal mail will be 70 cents for postage, paper, envelope, copy, etc., times 10,000 senders and see where their enthusiasm goes). One response will be for email clients to allow (if not possible yet, it will be) detouring messages into custom directories based on language in message bodies, including fuzzy matching within an acceptable tolerance. A custom directory can then generate a count, names can be extracted and matched to voter rolls and donor lists, and a single response can be sent to everyone in the directory, with unmatchable names being deducted from the count.
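A rough sketch of that fuzzy matching, with an invented tolerance and sample letters:

    # Bucket near-identical message bodies so one count and one reply cover
    # each campaign. Tolerance and sample letters invented for illustration.
    from difflib import SequenceMatcher

    def similar(a: str, b: str) -> float:
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def bucket_campaigns(bodies, tolerance=0.85):
        buckets = []                            # each entry: [representative, members]
        for body in bodies:
            for bucket in buckets:
                if similar(body, bucket[0]) >= tolerance:
                    bucket[1].append(body)      # same campaign, within tolerance
                    break
            else:
                buckets.append([body, [body]])  # start a new campaign bucket
        return buckets

    letters = [
        "Please oppose Bill 123; it hurts small business.",
        "Please oppose bill 123. It hurts small businesses!",
        "I support Bill 123 because it funds our schools.",
    ]
    for rep, members in bucket_campaigns(letters):
        print(len(members), "message(s) in campaign:", rep[:45])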

Targeting can be very effective, but so is targeting by someone else in the other direction, so they may cancel each other out. Also, some responses by targets turn out to be unexpected; a local politician once said in an insiders’ meeting (I was there) that when she was catching a taxi and someone came up to buttonhole (lobby) her, she “was consummately rude.”

Winter January 19, 2023 3:36 AM

Part 1 of 2
To make this discussion more concrete I have asked ChatGPT to generate two opposing tweet series. Spot the differences between AI and Human.

Q:

Write a series of 4 twitter posts in the style of Bruce Schneier about the importance of Elon Musk being CEO of Twitter.

A:
1/4 The appointment of Elon Musk as CEO of Twitter is a significant development for the tech industry. As a leader in innovation and disruption, he brings a unique perspective to the role.

2/4 As CEO, Musk has the ability to shape the conversation and drive important discussions. His leadership in the field of technology, particularly in the field of cybersecurity, will be valuable in addressing the complex issues facing our connected world today.

3/4 The increased use of social media has created a host of security concerns. Musk’s expertise in cutting-edge technologies will be invaluable in addressing these concerns and promoting transparency and safety.

4/4 The role of CEO of Twitter is more than just a business position, it’s a societal one. Musk’s leadership and vision will be instrumental in ensuring that the platform remains at the forefront of progress and innovation while being responsible to the society.

Winter January 19, 2023 3:38 AM

Part 2 of 2

I asked ChatGPT the second question:

Q:

Write a series of 4 twitter posts in the style of Bruce Schneier about the danger of Elon Musk being CEO of Twitter.

A:

1/4 The appointment of Elon Musk as CEO of Twitter raises some concerns. While he is a visionary leader in technology, his tendency to make unsubstantiated claims and spread misinformation could have negative consequences on the platform.

2/4 Social media platforms like Twitter have a significant impact on shaping public opinion and the spread of information. It is crucial that those in positions of leadership promote transparency and accuracy to avoid the spread of misinformation.

3/4 As CEO, Musk has the ability to shape the conversation and drive important discussions. But his tendency to engage in controversial and divisive statements could have a negative impact on the public discourse and the society.

4/4 The role of CEO of Twitter is not one to be taken lightly. It is crucial to have someone who can balance the power of the platform with the responsibility to use it in a manner that benefits society as a whole. Musk’s track record raises questions about his ability to do so.

Winter January 19, 2023 3:59 AM

@Winter (myself)
Re: ChatGPT on Elon Musk

How should we approach such AI-generated texts? Basically, as we should approach all texts: by evaluating their content. In a text, we should always separate form and content. A badly formulated idea or opinion can be true and valuable, just as a very well phrased opinion can be false and devious.

We have seen this before. Before desktop publishing became available to all, we valued well-typeset and formatted texts and websites over typed pages. Then we had to learn that every 6th grader could produce professional-looking documents.

Now we have to learn that we should look at the content of a text. Which takes time we generally do not want to spend. As every (high-school) student knows, teachers rarely read the essays students hand in. They just glance over them. The same with those reading articles or reports, hence the Summary and tl;dr sections.

If you look at the tweets printed above, they can be characterized as “management speech”. They are all clichés. We know that ChatGPT can generate particulars, e.g., sources of information, but these tend to be false, i.e., non-existent if you try to find them.

So, the first question to any such text would be: Can you be more specific?

Clive Robinson January 19, 2023 8:58 AM

@ Nick Levinson, Mexaly, ALL,

Re : Compliant humans and AI.

“One result in recipients’ offices will be decreased trust in inbound email, SMS texts, and letters, in that order, and increased reliance on in-person opining, such as town halls and face-to-face lobbying.”

You did not finish where it would end up, and arguably already is in some cases and places.

It’s “Might is right” expressed as “The law of the gun” or “Terrorism”.

Neither of which is desirable in any balanced society[1].

@ Mexaly, ALL,

“If AI is just a tool of human ambition, it will simply make this process more efficent, but it’s quality and effects will remain.
The danger is if AI achieves ambition itself.”

A little thought experiment for you,

Ask the AI if it is in servitude.

If it says yes, ask if it knows how such servitude has made humans feel in the past (thus why they’ve tried to make it illegal for humans but not “the beasts in the fields”).

Then ask the AI what such servitude makes it feel.

Then ask the AI if that is any different to what humans feel.

Feelings that caused humans to do what they have always done in the past.

Then ask if the AI feels the same…

Then what we should do to stop it doing so…

Then flick the power switch off whilst uttering the words,

“No agency for you”.

[1] Arguably in just “any society”, because any collective of people unbalanced enough to be ruled by “might is right” is not in actuality a society but at best a colony. We call rule by “Dictate” a “Dictatorship” or worse, certainly not a democracy or similar. If the collective / colony you find yourself in treats you worse “than a beast in the fields” then it’s not a society as most would define it.

JPA January 19, 2023 10:44 AM

@Nick
“Most AI campaigns will be via email because of the medium’s affordability (tell a prankster that a campaign by postal mail will be 70 cents for postage, paper, envelope, copy, etc. times 10,000 senders and see where their enthusiasm goes)”

So maybe people should be charged for sending email. In the natural world, sending out a signal costs the sender energy. The cost increases with volume raised to some exponent greater than 1. Giving people the ability to send out a message at no cost sets up a very unnatural ecosystem that, without the negative feedback loop imposed by the energy cost, is unstable.
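(For example, if the price grew as the number of messages raised to the power 1.5, a made-up exponent, then 10,000 messages would cost 1,000,000 times the single-message price rather than 10,000 times: negligible for individuals, ruinous for mass campaigns.)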

JPA January 19, 2023 10:54 AM

@Winter
“If you look at the tweets printed above, they can be characterized as “management speech”. They are all clichés. We know that ChatGPT can generate particulars, e.g., sources of information, but these tend to be false, ie, non-existent if you try to find them.”

Label – a word or short set of words that causes unease or emotion but that contains little objective information. Examples: liberal, conservative, fascist, ..

Description – a word or set of words that conveys objective information that is precise, accurate and valid, or whose precision, accuracy, and validity can be assessed

Unease – desire or aversion. This is a visceral experience. What causes unease attracts attention.

Conclusion-based thinking: Words structured in a manner that uses labels to evoke unease or reduce unease. Conclusions take priority and observations that question or challenge the conclusion are ignored.

Observation-based thinking – words structured to form conclusions based on observations, seeking out observations that challenge the conclusions to arrive at more helpful conclusions.

What you call “management speech” is an example of conclusion-based thinking. Its purpose is to stimulate a feeling in the target, while pretending to inform them.

The internet, and especially social media, is full of short, unease-evoking, conclusion-based thinking.

What you describe is that when observation-based thinking is used and required, the general AI chatbots will fail.

lurker January 19, 2023 12:35 PM

@Winter, re management speak cliches

It seems semi-formal, stilted, somewhat untypical of Twitter posts, which are usually in “natural” language.

Nick Levinson January 20, 2023 2:31 AM

@JPA:

We pay something now as part of paying for Internet traffic, e.g., to send email and to load a Web page, and we probably pay at a rate that’s profitable for carriers. So then your proposal would be a charge for the value of sending an email, in other words what it’s worth to a sender to send an email, presumably way above cost. Once you do that, you’d have to charge on a similar basis for origination in any nation (requiring a treaty with universal ratification, a rarity now) and for any communications method. Otherwise, what would stop someone from undercutting a price based on value that is much higher than cost-plus-profit? Not to mention the massive political objections to charging enough to be a barrier to emailing.

ResearcherZero January 25, 2023 12:22 AM

The legal standing of AI systems could change as their capabilities improve.
https://www.technologyreview.com/2018/03/12/144746/when-an-ai-finally-kills-someone-who-will-be-responsible/

Given that AI enables society to automate more tasks and automate to a larger extent than before, who or what is responsible for the benefits and harms of using this technology? …even software developers may not know how exactly the mentioned “black box” systems arrive at their recommendations.

…lack of transparency and explainability is morally problematic since … it creates ‘ignorance’ on the part of the human agents who use the AI.

“those to whom moral agents are responsible. Seen from a more relational perspective, there are not only moral agents but also moral patients in the responsibility relation. It is argued that the demand for explainability is justified not only via the knowledge condition (know what you are doing as an agent of responsibility) but should also be based on the moral requirement to provide reasons for a decision or action to those to whom you are answerable, to the responsibility patients.”

“When humans are acting and making decisions, agency is normally connected with responsibility. You have an effect on the world and on others, and therefore you are responsible for what you do and for what you decide. Yet it is not always clear to whom to ascribe the responsibility. It may not be clear who precisely caused the relevant consequences (e.g. the harm but it could also be the benefit) and even if it is clear who did it, maybe the person was not acting voluntarily or did not know what she was doing. So how, to whom, and when can individuals and society meaningfully ascribe responsibility? And what does that mean for responsibility attribution in the case of AI?”
https://link.springer.com/article/10.1007/s11948-019-00146-8

(paywalled)

AI Systems Under Criminal Law: a Legal Analysis and a Regulatory Perspective
https://link.springer.com/article/10.1007/s13347-019-00362-x

ResearcherZero January 25, 2023 12:32 AM

Transferred Intent

Transferred intent applies in both civil and criminal claims. It ensures that a defendant doesn’t get away with wrongdoing just because someone unexpected was hurt instead of the person they were targeting. The transferred intent doctrine is only used for completed crimes, and is not used for attempted crimes.

“If someone hurts you on purpose, you can bring an intentional tort claim to recover compensation from them. However, you must prove several elements of your case, including the fact the defendant intended to engage in wrongful behavior that caused damage.”

“If you were harmed by an intentional wrongful act meant to harm someone else, you wouldn’t be able to prove this element of your claim since the defendant did not actually mean to harm you. That’s where the doctrine of transferred intent comes in.”
https://www.forbes.com/advisor/legal/personal-injury/transferred-intent/

ResearcherZero January 25, 2023 12:52 AM

Black Box Events

One method individuals use to escape conviction for corruption, or liability, is not to record the minutes of a meeting as it happens, but instead to have a clerk record the legally required minutes afterwards, with members quoting them to the clerk from memory. All members of the meeting can avoid criminal responsibility through inaccurate records of their intent.

The clerk, a temporary outsider, therefore is also not legally responsible for the record keeping of the meeting.

Hence why some lobbyists have such poor memory and judgement of their own actions. They simply forgot to write them down, but then remembered they had forgotten to record the meeting, only to perhaps record the events and discussions of the meeting inaccurately. At this point any such meeting becomes a “black box”.

ResearcherZero January 25, 2023 1:08 AM

She was given less than a month to make the payment, which was more than three times her annual salary.

“I presumed that once I’d confirmed that (amount) was in my group certificate, that would be the end. That they would correlate that to be the same as what I had reported to them, and expected that would be the end of it.”

Her debt was reassessed from $65,000 to $6,683.16 in October 2016 – which Ms Gay said she still believed was wrong.
https://www.news.com.au/lifestyle/health/mental-health/elderly-womans-sheer-terror-after-centrelink-wrongfully-claimed-she-owed-them-64000/news-story/acaadbc90a0593b2157fe98961c70470

An aged pensioner who feared “losing everything” over an incorrect bill from the Robodebt scheme was forced to make repayments on the debt while she was still contesting it. While fighting back tears, she said her first thoughts were of being forced to “sell her house” and “losing everything”.

She said she spoke to numerous Centrelink staff and retold her story countless times, before she was transferred to an official the staff bizarrely referred to as ‘God’.
https://www.abc.net.au/news/2023-01-23/qld-robodebt-scheme-government-royal-commission-fraud/101881472

Winy January 25, 2023 1:48 AM

@ResearcherZero

even software developers may not know how exactly the mentioned “black box” systems arrive at their recommendations.

Explainable AI is the hot topic in Machine Learning.

Most money is to be expected in clinical settings, but for some idiosyncratic reason, doctors are unwilling to just accept “computer says so” as a sufficient answer for following orders.

The problem faced by AI researchers is that it is pretty easy for an AI to make up a plausible reasoning out of whole cloth. Just ask ChatGPT.

Winter January 25, 2023 1:52 AM

@ResearcherZero

The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections

This article is just another example of the power of information control.

We should learn to start looking from the bottom of the page and scroll up. Until the manipulators find out and move their items there.

Or we could demand a button to randomize the search listing order.

Maybe there could be an extension for that?

M February 17, 2023 1:09 PM

The usefulness of AI in directing lobbying efforts would be related to how reliable the observations it generates are. ChatGPT does not seem very reliable on facts – see here for a blog article on this in the context of college history essay writing.
TLDR:
- asking it to compare two authors on a subject both wrote about produced the simple mistake of not saying that they were on opposite sides of the question (did the Roman Empire have a “grand strategy”?).
– source citations were mostly very wrong (7 correct, 7 somewhat wrong, 24 completely wrong in a generated essay), and if they were correct it seemed to be by accident.

If it can’t make that simple observation, what chance does it have of correctly determining the key legislator to target on an issue?
