AI to Aid Democracy

There’s good reason to fear that AI systems like ChatGPT and GPT-4 will harm democracy. Public debate may be overwhelmed by industrial quantities of autogenerated argument. People might fall down political rabbit holes, taken in by superficially convincing bullshit, or obsessed by folies à deux relationships with machine personalities that don’t really exist.

These risks may be the fallout of a world where businesses deploy poorly tested AI systems in a battle for market share, each hoping to establish a monopoly.

But dystopia isn’t the only possible future. AI could advance the public good, not private profit, and bolster democracy instead of undermining it. That would require an AI not under the control of a large tech monopoly, but rather developed by government and available to all citizens. This public option is within reach if we want it.

An AI built for public benefit could be tailor-made for those use cases where technology can best help democracy. It could plausibly educate citizens, help them deliberate together, summarize what they think, and find possible common ground. Politicians might use large language models, or LLMs, like GPT-4 to better understand what their citizens want.

Today, state-of-the-art AI systems are controlled by multibillion-dollar tech companies: Google, Meta, and OpenAI in connection with Microsoft. These companies get to decide how we engage with their AIs and what sort of access we have. They can steer and shape those AIs to conform to their corporate interests. That isn’t the world we want. Instead, we want AI options that are both public goods and directed toward public good.

We know that existing LLMs are trained on material gathered from the internet, which can reflect racist bias and hate. Companies attempt to filter these data sets, fine-tune LLMs, and tweak their outputs to remove bias and toxicity. But leaked emails and conversations suggest that they are rushing half-baked products to market in a race to establish their own monopoly.

These companies make decisions with huge consequences for democracy, but little democratic oversight. We don’t hear about political trade-offs they are making. Do LLM-powered chatbots and search engines favor some viewpoints over others? Do they skirt controversial topics completely? Currently, we have to trust companies to tell us the truth about the trade-offs they face.

A public option LLM would provide a vital independent source of information and a testing ground for technological choices with big democratic consequences. This could work much like public option health care plans, which increase access to health services while also providing more transparency into operations in the sector and putting productive pressure on the pricing and features of private products. It would also allow us to figure out the limits of LLMs and direct their applications with those in mind.

We know that LLMs often “hallucinate,” inferring facts that aren’t real. It isn’t clear whether this is an unavoidable flaw of how they work, or whether it can be corrected for. Democracy could be undermined if citizens trust technologies that just make stuff up at random, and the companies trying to sell these technologies can’t be trusted to admit their flaws.

But a public option AI could do more than check technology companies’ honesty. It could test new applications that could support democracy rather than undermining it.

Most obviously, LLMs could help us formulate and express our perspectives and policy positions, making political arguments more cogent and informed, whether in social media, letters to the editor, or comments to rule-making agencies in response to policy proposals. By this we don’t mean that AI will replace humans in the political debate, only that it can help us express ourselves. If you’ve ever used a Hallmark greeting card or signed a petition, you’ve already demonstrated that you’re OK with accepting help to articulate your personal sentiments or political beliefs. AI will make it easier to generate first drafts, provide editing help, and suggest alternative phrasings. How these AI uses are perceived will change over time, and there is still much room for improvement in LLMs—but their assistive power is real. People are already testing and speculating on their potential for speechwriting, lobbying, and campaign messaging. Highly influential people often rely on professional speechwriters and staff to help develop their thoughts, and AI could serve a similar role for everyday citizens.

If the hallucination problem can be solved, LLMs could also become explainers and educators. Imagine citizens being able to query an LLM that has expert-level knowledge of a policy issue, or that has command of the positions of a particular candidate or party. Instead of having to parse bland and evasive statements calibrated for a mass audience, individual citizens could gain real political understanding through question-and-answer sessions with LLMs that could be unfailingly available and endlessly patient in ways that no human could ever be.

Finally, and most ambitiously, AI could help facilitate radical democracy at scale. As Carnegie Mellon professor of statistics Cosma Shalizi has observed, we delegate decisions to elected politicians in part because we don’t have time to deliberate on every issue. But AI could manage massive political conversations in chat rooms, on social networking sites, and elsewhere: identifying common positions and summarizing them, surfacing unusual arguments that seem compelling to those who have heard them, and keeping attacks and insults to a minimum.

AI chatbots could run national electronic town hall meetings and automatically summarize the perspectives of diverse participants. This type of AI-moderated civic debate could also be a dynamic alternative to opinion polling. Politicians turn to opinion surveys to capture snapshots of popular opinion because they can only hear directly from a small number of voters, but want to understand where voters agree or disagree.
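
As a rough sketch of how the summarization piece of such AI-moderated debate might be wired up (illustrative only: the complete() function below is a placeholder for whatever LLM one chooses, not a reference to any particular product’s API):

```python
# Illustrative sketch: condensing a large civic comment thread into a short
# list of positions and points of common ground. The LLM call is a stub.
from typing import List

def complete(prompt: str) -> str:
    """Placeholder: send a prompt to an LLM and return its text reply."""
    raise NotImplementedError("wire this to the model of your choice")

def summarize_positions(comments: List[str], batch_size: int = 50) -> str:
    """Summarize comments in batches (to fit a context window), then merge."""
    partials = []
    for i in range(0, len(comments), batch_size):
        batch = "\n---\n".join(comments[i:i + batch_size])
        partials.append(complete(
            "Neutrally list the distinct policy positions in these comments, "
            "without attributing them to individuals:\n" + batch))
    return complete(
        "Merge these partial summaries into one list of positions and flag "
        "any apparent common ground:\n" + "\n".join(partials))
```

Batching and merging is one simple way around context-window limits; a real deployment would also need provenance, audit logs, and human review.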

Looking further into the future, these technologies could help groups reach consensus and make decisions. Early experiments by the AI company DeepMind suggest that LLMs can build bridges between people who disagree, helping bring them to consensus. Science fiction writer Ruthanna Emrys, in her remarkable novel A Half-Built Garden, imagines how AI might help people have better conversations and make better decisions—rather than exploiting people’s cognitive biases to maximize profits.

This future requires an AI public option. Building one, through a government-directed model development and deployment program, would require a lot of effort—and the greatest challenges in developing public AI systems would be political.

Some technological tools are already publicly available. In fairness, tech giants like Google and Meta have made many of their latest and greatest AI tools freely available for years, in cooperation with the academic community. Although OpenAI has not made the source code and trained features of its latest models public, competitors such as Hugging Face have done so for similar systems.

While state-of-the-art LLMs achieve spectacular results, they do so using techniques that are mostly well known and widely used throughout the industry. OpenAI has only revealed limited details of how it trained its latest model, but its major advance over its earlier ChatGPT model is no secret: a multi-modal training process that accepts both image and textual inputs.

Financially, the largest-scale LLMs being trained today cost hundreds of millions of dollars. That’s beyond ordinary people’s reach, but it’s a pittance compared to U.S. federal military spending—and a great bargain for the potential return. While we may not want to expand the scope of existing agencies to accommodate this task, we have our choice of government labs, like the National Institute of Standards and Technology, the Lawrence Livermore National Laboratory, and other Department of Energy labs, as well as universities and nonprofits, with the AI expertise and capability to oversee this effort.

Instead of releasing half-finished AI systems for the public to test, we need to make sure that they are robust before they’re released—and that they strengthen democracy rather than undermine it. The key advance that made recent AI chatbot models dramatically more useful was feedback from real people. Companies employ teams to interact with early versions of their software to teach them which outputs are useful and which are not. These paid users train the models to align to corporate interests, with applications like web search (integrating commercial advertisements) and business productivity assistive software in mind.

To build assistive AI for democracy, we would need to capture human feedback for specific democratic use cases, such as moderating a polarized policy discussion, explaining the nuance of a legal proposal, or articulating one’s perspective within a larger debate. This gives us a path to “align” LLMs with our democratic values: by having models generate answers to questions, make mistakes, and learn from the responses of human users, without having these mistakes damage users and the public arena.
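
One simplified way to capture that feedback is the pairwise-preference format commonly used in reinforcement-learning-from-human-feedback pipelines. The sketch below is only illustrative; the field names and file format are assumptions, not a prescribed standard:

```python
# Illustrative sketch: logging human preference judgments for later
# fine-tuning of a "democracy-aligned" assistant. All names are made up.
import json
from dataclasses import dataclass, asdict

@dataclass
class PreferenceRecord:
    prompt: str     # e.g. "Explain the trade-offs in this zoning proposal."
    chosen: str     # the reply the participant judged more helpful and fair
    rejected: str   # the reply they judged less helpful or fair
    use_case: str   # e.g. "moderation", "explanation", "drafting"

def log_preference(record: PreferenceRecord, path: str = "feedback.jsonl") -> None:
    """Append one judgment as a JSON line, the usual input to reward-model training."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_preference(PreferenceRecord(
    prompt="Summarize the arguments for and against the transit levy.",
    chosen="Supporters argue... Opponents argue...",
    rejected="The levy is obviously a bad idea because...",
    use_case="explanation"))
```

Collected at scale, records like these are what a reward model is trained on before the base model is tuned against it.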

Capturing that kind of user interaction and feedback within a political environment suspicious of both AI and technology generally will be challenging. It’s easy to imagine the same politicians who rail against the untrustworthiness of companies like Meta getting far more riled up by the idea of government having a role in technology development.

As Karl Popper, the great theorist of the open society, argued, we shouldn’t try to solve complex problems with grand hubristic plans. Instead, we should apply AI through piecemeal democratic engineering, carefully determining what works and what does not. The best way forward is to start small, applying these technologies to local decisions with more constrained stakeholder groups and smaller impacts.

The next generation of AI experimentation should happen in the laboratories of democracy: states and municipalities. Online town halls to discuss local participatory budgeting proposals could be an easy first step. Commercially available and open-source LLMs could bootstrap this process and build momentum toward federal investment in a public AI option.

Even with these approaches, building and fielding a democratic AI option will be messy and hard. But the alternative—shrugging our shoulders as a fight for commercial AI domination undermines democratic politics—will be much messier and much worse.

This essay was written with Henry Farrell and Nathan Sanders, and previously appeared on Slate.com.

EDITED TO ADD: Linux Weekly News discussion.

EDITED TO ADD: This post has been translated into Hebrew.

Posted on April 26, 2023 at 6:51 AM • 58 Comments

Comments

Robert Jackson April 26, 2023 8:52 AM

Were we governed by angels, this would not be necessary. Since we are not, it will not achieve your desired goals and the Orwellian risks are frightening.

Clive Robinson April 26, 2023 9:37 AM

@ Robert Jackson, ALL,

Re : We are what we are, imperfect at best.

“Were we governed by angels…”

There are no angels, just as there are no demons. Neither a God nor a Devil, saint nor sinner, just humans and our desires to be something other than what we currently are.

Thus we strive for both “Good and Bad”; they are oft the same thing, the purpose of which is chosen by a Directing Mind, and judged by a supposedly impartial body of observers, each with their own imperfect view of any truth of actuality. But impartiality is not possible, because we are creatures of a herd we call society, with mores, morals, ethics, regulations, and legislation to ensure we follow a path whose destination is unknown.

It is important to realise why the destination is unknown. It is because of both the tyranny of the masses and the dictatorship of the few, each occurring we know not when, like the buffers, bouncers, and pins in a pinball machine where the ball has no vision; it is sent this way and that by forces it cannot understand, under the imperfect guidance of a directing mind that cannot be sensed.

Thus we invent gods and devils, saints and sinners, and much in between, to try and invent understanding where none can be found nor proved.

No AI will be saint or sinner, demon or angel, and never God or Devil, unless we are monumentally stupid enough to surrender our freedoms of thought, movement, association, and all that comes from them. The price we pay is a heavy one and few would choose it, and those that take it on for others should always have their motives checked and monitored.

What is that price? It’s simple: it’s the responsibility to think and act not for ourselves, or a few we care for, but for all, to ensure the tide lifts all and equity is for all.

random nobody April 26, 2023 11:21 AM

Unless the AI is allowed to scrub the web indefinitely, and no filters are placed on its output, it will be staunchly undemocratic. Any attempt to implement political correctness is a guaranteed way to bias the AI and render it a mouthpiece of the government rather than a tool for the people. We all saw what happened with the “Laptop from Hell”.

Clive Robinson April 26, 2023 11:55 AM

@ Bruce, ALL,

‘We know that LLMs often “hallucinate,” inferring facts that aren’t real. It isn’t clear whether this is an unavoidable flaw of how they work, or whether it can be corrected for.’

In the current state of the game the hallucinations are “an unavoidable flaw”.

The reason is that “inferring facts that aren’t real” is an expected failing of the way LLMs are trained with only partial knowledge.

That is, an LLM has no way to test against reality, and at best has only a fraction of knowledge, given as training data.

Thus if it has, from its training data set, two or more equiprobable answers to a question, even though conflicting, the LLM will choose one of them at random (if not both). It does this because, with no knowledge of fact or fiction, it has little choice other than to choose at random. So the two parts of “stochastic parrot” are fulfilled: the random selection of “stochastic” and the prototype sentences of “parrot”.
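
A toy illustration of that random selection (purely schematic, not how any particular LLM is actually implemented; the numbers and names are made up):

```python
# Toy sketch: temperature sampling from a softmax over two continuations the
# model scores as equally likely. With equal logits each is picked ~50% of
# the time -- the model has no internal notion of which one is factual.
import math
import random

def sample(logits, temperature=1.0):
    weights = [math.exp(l / temperature) for l in logits]
    total = sum(weights)
    return random.choices(range(len(logits)), [w / total for w in weights])[0]

continuations = ["answer A (true)", "answer B (false)"]
logits = [2.3, 2.3]  # equiprobable as far as the model is concerned
print(continuations[sample(logits, temperature=0.8)])
```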

The problem is you cannot just remove “false sentences” or “sentence prototypes”, because whilst they may be false under one question they are not under another.

The LLMs, whilst they do have some context recognition, do not have sufficiently tight context ability.

In part this is due to a human failing…

Scientific papers for instance are almost always about “success” not “failure” therefore they in no way represent a true picture of the actual state of research in any one domain of knowledge. Train an LLM on them and it is going to be biased.

Anyone who has worked in a certain type of management-led organisation knows that,

“Failure that can not be hidden is not recorded, and failure that can be disguised always is, thus true success is never seen for what it is.”

As I’ve indicated in the past, only around one in ten very minor product innovations makes it as a successful product. The greater the level of innovation, the less chance a new product has to succeed at any given point in time. More often than not it fails, yet later a similar product does succeed. That is, there has to be sufficient need for the product for it to have a realisable market.

But management fail to acknowledge this inconvenient truth, so they lie about it… Or if you prefer, “they hallucinate about their success”.

Oh and another failing of much of management is,

1, They do not want to know about problems.
2, They only want to hear about solutions.
3, They do not want to choose a solution as that involves responsibility.

Now try and imagine an LLM as a tool for management… I don’t think,

“Garbage in, Garbage out.”

Will come even close.

Jordan Brown April 26, 2023 12:25 PM

Sounds like a great idea, because governments have such great records of never squashing inconvenient ideas and of producing high quality software.

modem phonemes April 26, 2023 12:48 PM

@ Clive all

we invent gods … where non can be found nor proved

The question in reason is whether God exists, and natural reason proves the answer is affirmative. The argument is based on the existence of things, and is expounded in Aquinas.

The demonstration of Aquinas proceeds from the existence of sensible things. The pattern is clearcut. Existence is not contained within the natures of sensible things, it comes to them from an efficient cause, and ultimately from subsistent existence, that is, something whose nature is to exist. This is God. The nerve of the argument is that potentiality is actualized only by something already in actuality.

  1. Owens, Joseph. St. Thomas Aquinas on the Existence of God: Collected Papers of Joseph Owens. State University of New York Press, 1980.

responsibility to think and act not for ourselves, or a few we care for, but for all

This is highly laudable. But an item, perhaps the first item, on the agenda of this vision is to answer the question whether there is a God, and what if any God’s relation is to humanity, and what moral good is thereby imposed. Only the truth will make one free. Otherwise we are imposing something vitiated by falsehood and little good will come from that.

AL April 26, 2023 12:49 PM

All I see out of ChatGPT is a refinement of the “talking points” marketing technique. Instead of multiple people reading from the same script, which makes it obvious that a single entity is orchestrating the talking points, these actors/politicians will get an individualized script that will make it less obvious that one entity is calling the shots.

Givon Zirkind April 26, 2023 2:28 PM

AI by gov’t to help the people? Really? Like all the other gov’t projects for public good and “bright ideas”. All that good gov’t information. So, correct. So, helpful. Enlightening & educating. But, so often discredited decades later after the harm is done. What could possibly go wrong? Just look at where we are now.

lurker April 26, 2023 2:45 PM

@random nobody, @All

“scrubbing the web indefinitely, with no filters”

reveals the great fallacy of modern times that the web is the fount of all knowledge. Indeed a great amount of material (written, graphic art, and sound and video recordings) has been put on the web, especially to the ever-expanding archivedotorg.

But the quality of some of that is definitely hallucinatory. Look at some of the books printed pre-photolithography. Some of these are available only as pdf of the page image. Some have valiant but futile attempts at OCR presented as “Full Text”, consisting of pages of gibberish.

Then there is the thorny matter of copyright. Current LLMs claim to filter this out. Some new writing never appears on the web because even now it is printed only on paper. Some appears in corners of the web where honest citizens fear to tread. Some appears, briefly, on the open web, then disappears under takedown notice.

Do, or how do, LLMs interpret sound and video material? How can they determine the intellectual content of graphic art? Can AI read a technical drawing the way a human engineer does? It seems there is no immediate danger of AI becoming omniscient, and those who believe it is are foolish.

Michael Wojcik April 26, 2023 3:59 PM

I’ll just touch on one point: LLM as expert explainer.

While humans are very much imperfect explainers – error-prone, biased, difficult to evaluate, often duplicitous, etc – turning to an LLM to explain a complex subject is an inferior solution.

Cooperative, knowledgeable humans attempting to explain a complex subject will operate under models of their interlocutors. That includes considering audience – a critical rhetorical dimension that LLMs have no insight into except for query context (which is often minimal, and for all practical models quite small at the limit). It includes attempting to understand the questioner’s intent, which often suggests that the real question is something other than what was literally asked.

Perhaps most importantly, human interlocutors drift off topic. LLMs, at least unidirectional deep-transformer-stack LLMs like GPT-x, suffer badly from a “narrow nerd” problem: they append tokens based on the gradient of whatever small pocket of parameter space the context pushed them into, with a bit of noise (the “temperature” setting) to add a little annealing. If they go “off topic” it’s due to hallucination or SolidGoldMagikarp-style “bad tokens” leading them into some sort of nonsense they were accidentally trained on.

A human expert will generally start on an explanation and wander off into other areas of interest to the expert, which the questioner may never have heard of. Human conversation, when it’s productive, is serendipitous. (From an information-theory point of view this is necessary: the information content of the conversation is what the listener didn’t expect, what’s surprising.) LLMs are just good at turning a large amount of data (of dubious quality) into plausible text. There are no qualia behind it. No preferences, no interests, no speculations.

Doug April 26, 2023 5:48 PM

Completely naive to think 1) that the government has the competency to do this, and 2) that what will come out of the bureaucracy will be free from bias. There is zero chance that any AI can be unbiased – simply by including or excluding sources, the corpus will have an inherent tilt.

The place for government in this space is threefold:
1) Ensure that AI scrapers have legitimate access to content (it’s way beyond fair use)
2) Require disclosure of the data sources used in the model (right alongside the results)
3) Protect individuals and other entities from fraudulent and inaccurate content by holding the platforms legally and financially responsible for the content they generate. Generate a fake video of anyone saying something they didn’t? You’re liable for defamation.

JonKnowsNothing April 26, 2023 5:59 PM

@lurker, @Clive, All

re: the great fallacy of modern times that the web is the fount of all knowledge

Case in point:

The fount of all knowledge, aka WikiP, is a user-edited database of encyclopedia-type knowledge. It has limitations on what you can push on that site. Anyone can edit pages, however there are some that are “hot topic” ones, where edit wars take place.

Edit wars are revisions, redactions, and restores, normally in quick sequence. A good number of topics no longer have a perma editor, so The Last Edit Wins. Lots of editors have died, lots have lost interest in a topic, lots have given up on framing topics to the likes of the Higher Ranked Editors (oxford comma wars). So lots of the pages are “stale”. They do not necessarily reflect the current state of any topic.

Sometimes, if there’s an edit war in progress, what you see will depend on the time of viewing. Going back even moments later can give an entirely new page, totally redesigned with revised text and content.

Very generally, an edit war takes place when 2 or more editors wish to restrict proposed content. The new content is redacted, and unless another editor restores it, it may never be seen. Some stuff is normal spam that needs removal. Other times it’s a hot topic where one editor promotes a different view than another editor. This may be an individual or a group of determined editors that rage over the content. Again, the type of content is highly restricted to encyclopedia-type content. Governments globally monitor and alter topics that are unfavorable to them.

  • Starlight Tours Edit War

Starlight Tours refers to a practice by Canadian police in the City of Saskatoon, in the Province of Saskatchewan, from 1976 into the 2000s, of picking up people (men and women) on freezing nights, then driving them to a remote area and leaving them there to freeze to death.

Victims died from hypothermia.

Between 2012 and 2016, the “Starlight tours” section of the Saskatoon Police Service’s English Wikipedia article was deleted several times.

On March 31, 2016, the Saskatoon StarPhoenix reported that “Saskatoon police have confirmed that someone from inside the police department deleted references to ‘Starlight tours’ from the Wikipedia web page about the police force.” According to the report, a “police spokeswoman acknowledged that the section on starlight tours had been deleted using a computer within the department, but said investigators were unable to pinpoint who did it.”

AI has an imperfect memory. Humans also have imperfect memories but our memories have longer duration. The families of the victims do not forget, whether or not it’s in WikiP, the memories remain.

vas pup April 26, 2023 6:24 PM

@All

How is government AI going to be developed? The government, hopefully through a transparent competition open to the public, will assign the creation and maintenance of government AI to the same private companies. For now, are there other options?

Musk suggested creating a “TruthGPT” open to the public:
link is: https://tuckercarlson.com/elon-musk-i-want-a-maximum-truth-seeking-ai/

So, there is no such thing as independent AI: it’ll depend either on big corporations or on a government agenda. The latter may change with the election cycle. See, I am not against government per se, just against dysfunctional government run by mediocre folks, very often selected not on merit as the primary criterion.

In the past, government has proved to be effective: e.g. the Apollo missions, the Manhattan Project.

Directly on the subject:

AI creators must study consciousness, experts warn +++
https://www.bbc.com/news/technology-65401783

“An open letter signed by dozens of academics from around the world calls on artificial-intelligence developers to learn more about consciousness, as AI systems become more advanced.
“It is no longer in the realm of science fiction to imagine AI systems having feelings and even human-level consciousness,” it says.

!!!Most experts agree AI is nowhere near this level of sophistication.
But it is evolving rapidly and some say developments should be paused.

The term AI covers computer systems able to do tasks that would normally need human intelligence. This includes chatbots able to understand questions and respond with human-like answers, and systems capable of recognizing objects in pictures.

Generative Pre-trained Transformer 4 (GPT-4), an AI system developed by ChatGPT chatbot creator OpenAI, can now successfully complete the bar exam, the professional qualification for lawyers, although it still makes mistakes and can share misinformation.

The Association for Mathematical Consciousness Science (AMCS), which has compiled the open letter, titled “The responsible development of AI agenda needs to include consciousness research”, said it did not have a view on whether AI development in general should be paused.

=>But it pushed for a greater scientific understanding of consciousness, how it could apply to AI and how society might live alongside it.”

I hope many bloggers can read it before the post is sanitized.

Clive Robinson April 26, 2023 6:48 PM

@ modem phonemes,

Re : Man making God in his own image.

“The argument is based on the existence of things, and is expounded in Aquinas.”

I’ve never had much respect for Aquinas’s Five Ways to establish the existence of God, which come from the beginning of “Summa theologiae”.

1, Like cannot move like.
2, Non efficient cause.
3, Universe exists for a reason
4, Maximally being from maximally Good, Noble, and True.
5, The intelligent creator.

I know that some theological scholars think the Five Ways are an argument for a uniquely Christian God… But I find they fail the simple logic tests, thus have to fall back on the “To not believe is not to have faith” idiocy.

But when you look at the arguments they are mostly the nonsenses of “first cause”, “Maximally Good”, or “intelligent designer”.

The argument for “First Cause” fails to the “Turtles all the way up” problem. That is,

1, Who created God?
2, If a God cannot create itself

Then something greater than the God created the God, which takes you into the “there is always one more number” argument that gives us infinity, hence Turtles all the way up.

The notion of “good or bad” is not a measure of any definition that makes sense or is actually measurable with a scale. So there can be no “maximal”.

As for “intelligent designer” it fails to the “who designed the designer” or “There is always one more” argument.

But the most fundamental failing of Aquinas’s Five Ways to establish the existence of God is that they argue from effect to cause, which is a serious no-no. I’ve pointed this out many times in the past when pointing out that so-called “Forensic Science” as practiced is not science.

So sorry, holding up Aquinas’s Five Ways to establish the existence of God is akin to saying the Earth is flat and was designed and built by a deity in seven days just a few thousand years ago, which most rational people reject for good reason.

All Aquinas’s Five Ways to establish the existence of God do is in effect prove,

“Man made God in his likeness, without rational thought.”

Thus why invent a deity? Well, when we allegedly worshiped the Sun, this had the saving grace that the energy for life to exist on Earth comes from the Sun… After that, most deities were either a “long con” or an “authoritarian stick to beat the populace with”.

A simple belief in mankind and mankind’s abilities as a social creature to act mostly in society’s best interests is enough. But then the long con and associated oppression and thuggery of the “King Game” would no longer work.

Clive Robinson April 26, 2023 7:15 PM

@ vas pup,

I would not use anything involving the words of,

Tucker “the fake news crank” Carlson or his boss Rupert “the bare-faced liar” Murdoch.

Because they’ve effectively admitted spewing fake news to the point it was slander, and paid 3/4 of a billion USD rather than get immolated in court.

As far as I’m aware that 3/4 billion is not going to stop other slander cases, which if they do go to court are likely to cause no end of further embarrassment for Carlson and Murdoch…

Mind you, it will be interesting to see if the cheque actually cashes… As getting a sum of money that large together at short notice is usually not a case of pulling the petty cash box out of the accountant’s desk drawer…

The sad thing is that the party who is going to end up paying the bill is likely to be the US Gov, as Rupert will no doubt find a way to make it tax-deductible or similar…

XYZZY April 26, 2023 11:31 PM

A very long time ago I sat in a small lab where we built a prototype of a system that would have over 100 9 GB hard drives connected to a single Windows computer, chatting with Bill Gates. We could all see how this would keep scaling up. Part of the plan was to index a bunch of text, and we used outside software to split the index among several computers, send a query to them all, and collect the results. It was a large enough text database that you could get some interesting results. I commented that we could also build an index of the internet or scan in a library of books. Looking further into the future, I supposed you could build a machine that knew everything. Microsoft passed on the idea of an experiment continually pushing the limits of storage size and indexing. Does society now have a machine that knows everything? Are we close?

Zick April 27, 2023 4:47 AM

Just to repeat what others already said-

“…That would require an A.I. not under the control of a large tech monopoly, but rather developed by government and available to all citizens”

This already sounds alarming. Why not limit the government’s involvement to monopoly issues only?

modem phonemes April 27, 2023 8:15 AM

@ Clive

Re: five-a-side

The “five ways” as presented by Aquinas are really just one same way. They present the same underlying argument in terms of five observable aspects of reality. The center common to all is as said above: “Existence is not contained within the natures of sensible things, it comes to them from an efficient cause, and ultimately from subsistent existence, that is, something whose nature is to exist.”

The argument is purely natural reasoning from things; there is nowhere an appeal to faith.

There is no “Turtles all the way up, who made God” objection. The argument establishes a being whose nature is to exist. That is the end; God was not made, God simply is.

In this case the argument from effect to cause succeeds. Any argument from finite natural objects and their formal characteristics would lead no further than to another finite natural object. However, from the starting point of existence, which is not a nature (part of the form) in finite things, one can argue cogently to a being for which existence is the nature.

Trull April 27, 2023 8:47 AM

The problem with these discussions about “ai and democracy” is that the premises are flawed.

A. LLMs are not AI, but are being framed as comparable to AGI by people invested in them, under the guise of altruistic caution, i.e. the notable recent open letter on AI development (y’all know which one I mean). See Emily Bender’s “stochastic parrots” paper (cited wrongly in the open letter), etc. The idea that LLMs are human-competitive is simply not true, and is a consequence of people implicitly injecting their own meaning into the output of LLMs as if there were another person on the other end. Also consider that ChatGPT replies with the “three dots” “typing” interface. This is an intentional design decision to create the illusion that you are interacting with something intelligent and conscious that has intent. It does not. This concern-trolling request for government regulation that obviously won’t come only serves one actual purpose: “Investors who read Time and the WSJ! Look! I have something that is going to completely restructure the foundations of the economy and society! Better give me money now, or you’ll miss out!”

B. The simplistic idea that democracy need only consist of the ability to vote with correct, true information provided and on hand. That is obviously important, but consider that for most people’s lives in the western world, particularly the U.S. with our questionable labor rights in many states, much of your day is spent in a pseudo-dictatorship anyway. Ask anyone who works retail or food service if they have to ask permission to use the restroom.

Hard to say we live in a free society or democracy if people can have their healthcare taken away for using the restroom during the lunchtime rush! When one considers that LLMs are most capable of automating or streamlining a great number of office jobs, which are a source of regular, stable, middle-class income for people who would otherwise be in more working-class service-industry jobs, it becomes clear that the “threat to democracy/freedom” posed by LLMs is already here, and it isn’t a spooky sci-fi story being told by Musk et al. but a simple case of capital replacing labor without an appropriate redirection of that labor power.

Winter April 27, 2023 11:06 AM

@modem

The question in reason is whether God exists, and natural reason proves the answer is affirmative.

Posing an undecidable question at the start is probably intended to install your personal faith as the answer.

Logic cannot answer questions about the empirical world. For every empirical question, e.g. what is the color of swans, or do dinosaurs still roam the earth, there are a multitude of valid answers that all could be true from a logical perspective (white or black, depending on species; dinosaurs once did roam the earth, but do not do so now, unless you include birds).

Whether God exists is either an empirical question, if the existence of God has empirical consequences. Or, if God’s existence does not have empirical consequences, it is entirely a question of faith.

Neither case requires a logical proof.

modem phonemes April 27, 2023 11:14 AM

@ vas pup

It is no longer in the realm of science fiction to imagine AI systems having feelings and even human-level consciousness

Consciousness is just a kind of knowledge.

Knowing, to be true to reality i.e. the things known, requires the immaterial reception of the form of the things. There is no way to explain it as material reception of form. This now implies an immaterial soul. (Animals also have immaterial souls but not immortal souls, since they do not exhibit knowledge of abstract truth.)

Machines don’t have souls unless God gives them such.

So AI can never be expected to have consciousness.

Winter April 27, 2023 11:49 AM

@Trull

LLMs are not AI, but are being framed as being comparable to AGI by people invested in them under the guise of altruistic caution i.e. the notable recent open letter on AI development (ya’ll know which one I mean).

The funny thing is, LLMs are getting much more out of the text than anyone ever expected. It seems that our use of words captures a lot of the structure of our thinking and the way humans perceive the world.

Those who belittle LLMs generally have an overly simplistic idea of how language use reflects our minds’ workings.

See Emily Bender (cited wrongly in the open letter) stochastic parrots paper etc.

Emily Bender’s papers have been discussed extensively elsewhere [1]. She falls into a few fallacies. Mainly, understanding and intelligence do not require consciousness. Also, LLMs do not just capture syntax (word order), but also semantics (meaning), in a way that allows LLMs to reconstruct, e.g., color triangles from word use. [2]

[1] ‘https://www.schneier.com/blog/archives/2023/03/friday-squid-blogging-were-almost-at-flying-squid-drones.html/#comment-418916

[2] ‘https://link.springer.com/article/10.1007/s11023-023-09622-4

modem phonemes April 27, 2023 11:57 AM

@ Winter

Who says it’s an undecidable question? As well, it has been stated that the argument in question is based on what we know of natural things, not an implicit or other appeal to faith.

The argument is not pure logic. Of course, like all reasoning it uses logic, but is “empirical” in that it starts in what we know about natural things.

God’s existence does have empirical consequences. If God did not exist, nothing else would either. 😉 This does require a demonstration because it is not immediately evident.

Winter April 27, 2023 12:10 PM

@modem

If God did not exist, nothing else would either. 😉

Then we end in Spinoza where
God == the Universe.

I suspect this is not the conclusion you favor.

Winter April 27, 2023 12:25 PM

@modem

Animals also have immaterial souls but not immortal souls, since they do not exhibit knowledge of abstract truth.

1.) You do not know this. You repeat opinions of uninformed, prejudiced historical authorities.

2.) You severely underestimate the capabilities and diversity of animals and life in general.

modem phonemes April 27, 2023 1:20 PM

@ Winter

we end in Spinoza

The argument I’ve outlined doesn’t lead to identifying God with the universe. Rather it is saying if God did not exist, there would be nothing to cause the universe.

repeat … uninformed, prejudiced historical authorities … severely underestimate …

No, I summarize cognitional arguments. No one has shown that non-human animals are capable of abstract knowledge, i.e. knowledge beyond the sensory. No animal, e.g., has exhibited that it understands the abstract mathematical “6”, though they may be able to recognize 6 physical things.

Trull April 27, 2023 2:33 PM

@ Winter
Whether LLMs are “getting more out of the text than anyone expected” is in the eye of the beholder. It stands to reason that if you feed it larger and larger datasets, it will mimic patterns that aren’t obvious to humans, because that’s what it is designed to do.
You misunderstand my criticism of AI. I belittle LLMs not because I have a simplistic idea of how language use reflects our mind’s inner workings, but because I have a complex view of cognition and do not believe it is something that can be captured with probabilistic models exclusively in the domain of text-based “language.” If anything, LLM boosters have an unsophisticated understanding and view of human communication and language. LLMs do not capture semantics. Meaning =/= probability of next word/character. Meaning requires intent. If LLMs were capturing meaning, they would not “hallucinate.” (A loaded term, but I must work with what we are given.) An LLM cannot “understand” because language reflects sensory experience that an LLM does not have and cannot have.

Winter April 27, 2023 4:38 PM

@Trull

It stands to reason that if you feed it larger and larger datasets, it will mimic patterns that aren’t obvious to humans, because that’s what it is designed to do.

That is too simplistic. Linguists are not stupid. They know the difference between “mimicking patterns” and reproducing deep structures of language.

When LLMs are able to map two languages well enough to translate between them, without a Rosetta stone (i.e., without parallel texts), then they capture something deep. And they do that between text and speech too. Those features were entirely unexpected.

LLMs do not capture semantics.

Sorry, but that is simply not true. Read the second link I posted.

Winter April 27, 2023 4:52 PM

@modem

Rather it is saying if God did not exist, there would be nothing to cause the universe.

How is that different from “if the universe did not exist, God would not exist”?

You define God as the creator of the universe, but what else is “he”? Are there gods without a universe?

No one has shown that non-human animals are capable of abstract knowledge, i.e. knowledge beyond the sensory.

You should read the books of Frans de Waal. In general, you have not kept up with ethology (the science of animal behavior in its natural habitat).

PattiM April 27, 2023 6:02 PM

This is someone’s grand idea for LLMs. Most players only care about how to make money. Legislation is the only tool to fight back (protect rights), and in defending most folks, it’s a pretty weak tool, indeed.

JonKnowsNothing April 27, 2023 7:02 PM

@PattiM, All

re: Legislation is the only tool to fight back (protect rights)… it’s a pretty weak tool

In the case of illiberal-neocon-libertarian economies, it’s no tool at all. Legislation is subject to the whim of enforcement, contrarian enforcement, and omitted enforcement.

Every country has thousands of laws, some the same as other countries’ and many different from each other’s. Yet enforcement by governments and police forces remains problematic. It’s a swing from overzealous to oblivious enforcement.

If there isn’t an Asset Recovery Aspect, enforcement drops to near nil.

Lots of enforcement laws in the USA are based on “monetary recovery”. This means that there is something the accused has that the enforcing body wants. So, following whatever processes that country has, the accused must yield up their personal freedom and have whatever assets they own taken from them. Those assets are sold on through a system of backdoor open-to-public auctions. Everything from guns to cars to houses is sold this way. The selling entity and enforcement group get a cut of the sale.

  • Ex: Some locations melt down guns collected during the course of a crime investigation and conviction. In other locations, the police sell the guns to backdoor-approved gun dealers. The police use the gun sale money as their form of “black budget”.

So, in the case of AI,

  • WHO are you going to ARREST?
  • WHO are you going to take to COURT?
  • From WHOM are you going to COLLECT MONETARY RECOVERY?

In the USA, the SCOTUS Citizens United ruling just about put a stopper in any enforcement of civil suits against MegaCorps, and any enforcement of criminal suits is even rarer.

The only time you can get a unified Takings Without Legal Repercussions is when a Bigger Armed Group decides they can just TAKE IT because of (fill in the blank)

  • Race, Creed, Gender, Nationality, Geographic Region, Place of Origin, Religion, Political Status, Citizenship or Non-Citizenship, Moral Justification

Legislation is not going to be effective. Legislating Moral and Ethical Codes doesn’t work well. People with Moral and Ethical Codes don’t do contrary actions. People that have opposing Moral and Ethical views don’t care about You or Yours.

Clive Robinson April 27, 2023 8:09 PM

@ modem phonemes, Winter, ALL,

Re : Man made gods in his likeness.

“That is the end; God was not made, God simply is.”

The implication of that is what you choose to call “God” exists outside of our known tangible physical universe and its energy, matter, forces, and known behaviours.

The consequences of that are,

1, It is impossible for anything inside our universe to see your “God”.
2, It is impossible for your “God” to be able to have an effect on what goes on inside our universe.

So the consequence of that is,

3, Therefore your “God” has no reality to the functioning of the universe we occupy.

Therefore the existence or not of your “God” is of no relevance to this universe we occupy.

Therefore there is no way to distinguish “fact from fantasy” nor any way to “reason existence or not”.

As @Winter notes,

“Whether God exists is either an empirical question, if the existence of God has empirical consequences. Or, if God’s existence does not have empirical consequences, it is entirely a question of faith.”

I’ve demonstrated that your “God”, like any other deity, cannot have empirical consequences with regard to this universe. So, as @Winter indicates, that leaves “a question of faith”, where the usage of “faith” is a synonym for imagining, fiction, or fantasy.

The fact we can imagine and invest in such imaginings, in no way makes them a possibility let alone a reality. The fiction shelves of bookshops demonstrate this for all to see, as do cinemas, art galleries as well as so much that is online…

modem phonemes April 28, 2023 1:26 PM

@ Clive @ Winter all

Re: impossible things

I don’t see shown anywhere here an insufficiency in the argument from the existence of physical things to a being whose nature is to exist. So the proof seems to stand.

The arguments presented for the impossibility of a god are general objections (independent of the proof based on existence of things).

They state, correctly, that a god would be “outside the … physical universe”.

They then immediately conclude that such a being could not have any contact with or effect on the universe.

The middle term in this argument is not given, but appears to be the premise that to act causally on the physical universe is only possible if the agent is itself physical, that is has a finite form as do all physical things we know.

However no proof is offered for this premise, and it is certainly not something that is self-evident.

So no demonstration has been presented.

Clive Robinson April 28, 2023 2:22 PM

@ modem phonemes, Winter,

Re : Man made Gods in his image.

“independent of the proof based on existence of things”

As already explained effect is no indicator of cause.

The fact that the Universe exists in no way proves or even allows for the proof that something thought up in the fantasy the human mind is capable of is actually existent.

“They then immediately conclude that such a being could not have any contact with or effect on the universe.”

If “your God” is outside of the universe, which, to be its creator, by your own chosen argument it has to be, that puts it outside of time and the bounds of energy, matter, and forces our universe is based on.

Science indicates a couple of things. The first is the big bang, before which nothing was known of our universe, in effect its epoch. The implication of which is that anything such as a creator has to be in existence before that epoch or big bang, and thus “before time” in our universe, and for various reasons cannot cross the barrier created. Because if it could, the constraints on energy and matter in our universe would be broken, and that would have had catastrophic effects long, long before now.

Aquinas’s arguments might once have appeared to be from reason, but they clearly are not; by modern standards and knowledge they are, shall we say, lacking in maturity.

The application of Occam’s Razor tells us that the most probable solution to the “deity arguments” is that they are all “man made” and have nothing whatsoever to do with the existence of the universe and all that’s in it.

By the way, as has been noted by others,

“The intricacies of a pocket watch, in no way indicates an ‘intelligent design’ for the solar system let alone the universe, even if they superficially have parallels”.

The same logic applies to “proof based on existence of things”: it’s not valid reasoning.

It’s like arguing that because you drop the strawberry jam on the floor and to your mind the mess looks like the bearded face of an old man, then it must be the image of your god, and therefore it is proof of the existence of your god… When all it really is, is the image of a sticky clean-up job to come.

Winter April 28, 2023 2:56 PM

@Clive, modem

The fact that the Universe exists in no way proves or even allows for the proof that something thought up in the fantasy the human mind is capable of is actually existent.

It is worse. Any cause, from any God to the Flying Spaghetti Monster, to no cause at all, is as likely as the bearded, anatomically male God of Catholics.

As any cause is equally likely, no cause is special.

“They then immediately conclude that such a being could not have any contact with or effect on the universe.”

If God has influence on/inside the Universe, then God’s existence becomes an empirical question that can be decided by observations. Therefore, we can ignore this question and live as if God does not exist until the observational evidence comes in.

lurker April 28, 2023 5:09 PM

@Clive Robinson, Winter
“Science indicates a couple of things. The first is the big bang, before which nothing was known of our universe, in effect its epoch.”

OED, epoch: Astronomy, an arbitrarily fixed date relative to which planetary or stellar measurements are expressed. [my bold]

So it’s still our fault.

modem phonemes April 28, 2023 5:09 PM

@ Clive @ Winter

As Aquinas says, “in re duo sint”, in things there are two (aspects). These are the form or definitional essence of the thing (the answer to the question what is it or what kind of thing is it); and the being or existence of the thing. One can reason on the basis of the form, or on the basis of the existence. Natural science proceeds on the basis of form.

the constraints on energy and matter in our universe would be broken

God has influence on/inside the Universe, then God’s existence becomes an empirical question that can be decided by observation

These remarks again continue to make the assumption that to act causally on the physical universe is only possible if the agent is itself physical, that is, has a finite form as do all physical things we know. This premise has to be justified before it can be used. You have your homework before you 😉 .

The intricacies of a pocket watch, in no way indicates an ‘intelligent design’ for the solar system let alone the universe, even if they superficially have parallels.

It’s correct that the argument from design is not cogent. Kant and Paley were wrong. Design is just another term for form. From physical form, one can only reach another finite physical form.

As already explained effect is no indicator of cause …
The same logic applies to “proof based on existence of things”: it’s not valid reasoning.

This is, as remarked previously, incorrect. A summary of the argument: From physical things, which do not have existence as part of their nature or form, which therefore only potentially exist, i.e. may or may not exist, one can reason to a being whose nature it is to exist. Potentiality is actualized only by something already in actuality, so the physical things we know are effects, and the thing which exists by nature is the ultimate cause. This is not something imagined or assumed; it is something reason leads to.

Winter April 28, 2023 6:59 PM

@modem

These remarks again continue to make the assumption that to act causally on the physical universe is only possible if the agent is itself physical,

If the agent acts causally, the agent is observable. So, if you state God acts causally on the universe, then God should be observable and his (sic) existence becomes an empirical question.

Until we have such empirical evidence, his existence is only hypothetical and can be ignored by the application of Occam’s razor.

If God is not observable, then his existence is inconsequential and whether he exists or not is purely a matter of faith. Proofs of his existence are empty.

modem phonemes April 29, 2023 12:40 AM

@ Winter

If the agent acts causally, the agent is observable. …

This isn’t universally the case. If in some instance all we know are effects, then we know there existed a causal agent. We may or may not be able to know beyond this something about the physical form of the cause and so have some empirical knowledge of the cause.

In the case of physical things, in view of the fact that existence is not part of their nature, their existence must be brought about by another, so they are effects. The argument eventually concludes to a being whose nature is to exist, i.e. God. We therefore know God is a causal agent, but we at no point involved the physical form of the things, so we learn nothing that involves physical form and hence nothing observable in the ordinary sense.

Empirical evidence in the sense you seem to mean is not the only route to knowledge. One could say by stretching the notion of “observable” to include “be aware of” that we do “observe” that God exists.

Clive Robinson April 29, 2023 3:31 AM

@ modem phonemes, Winter,

Re : Man made Gods in his image.

“In the case of physical things, in view of the fact that existence is not part of their nature, their existence must be brought about by another, so they are effects.”

That is a false assumption based on the faux argument that,

“Something can not come out of nothing”

The simple fact is, unless you can fully observe something coming into existence, you cannot actually say it came into existence, let alone,

“The What, Why, or Who, of the event”.

So,

“The argument eventually concludes to a being whose nature is to exist, i.e. God. We therefore know God is a causal agent,”

Is actually a nonsense, because of the obvious question,

“How did God come into existence?”

That is, seeing snow stuck under the eaves of a house does not tell you if,

1, it has always been there
2, just appeared there

But further if it was,

3, thrown up by snowball
4, thrown up by avalanche

And if you theorise the latter, it certainly does not tell you the “What, Why, or Who” of the start of the avalanche.

Just saying “by the hand of God” does not make it true.

As you’ve been told a couple of times already, Occam’s Razor[1] is applied.

Which in this case says it’s a “Turtles all the way up” issue that cannot be stopped, just as the “successor number” gives us the ever-growing natural numbers behind the concept of infinity.

I could go on, but your argument is not just flawed but false, and also an example of not making predecessor arguments.

It’s also accepted that as with the “successor number” argument, for every effect, there will always be one or more causes you are unaware of.

You can see why with the observer issue. An observer has limited perspective, thus only sees part of any effect or outcome. No matter how many observers there are to an effect or outcome, the full details of the event are not observed. Therefore there is always missing information. Such missing information gives ambiguity that increases exponentially as you reach backward.

[1] Occam’s Razor (novacula Occami), as I’ve mentioned several times in the past on this blog, was thought up by a philosopher, logician, and Franciscan friar (OFM) who came from the village of Ockham in Surrey, England, within a few hours’ walk of where I was born. He was thus called, in the fashion of the time, “William of Ockham”. His Razor is also known as the “law of parsimony” (lex parsimoniae), or stated as,

“Do not needlessly multiply hypotheses”

More informally as,

“The simplest explanation is usually the best one.”

Or less politely as the KISS principle, for,

“Keep It Simple, Stupid”

Winter April 29, 2023 9:07 AM

@modem

If in some instance all we know are effects, then we know there existed a causal agent.

The whole causality argument hinges on the assumption that both time and space are forever and infinite. But if time and space started at some point, then there is by definition a point in time where there was no before or place for a cause to be located.

We may or may not be able to know beyond this something about the physical form of the cause and so have some empirical knowledge of the cause.

Then the nature of this cause is undetermined. So, Occam’s razor works against introducing an old man with a beard who insists humans take a rest every 7 revolutions of their planet. The simplest cause would be no cause.

Winter April 29, 2023 9:38 AM

@modem

The argument eventually concludes to a being whose nature is to exist, i.e. God.

And we are back at Spinoza. The nature of the Universe is to exist.

Rick April 29, 2023 2:33 PM

No offence intended, but a lyric came to mind:
“In the year 5555
Your arms hangin’ limp at your sides
Your legs got nothin’ to do
Some machine’s doin’ that for you”

bluefinch April 29, 2023 4:13 PM

The ostensible article mentions LLMs allegedly “hallucinating”.
This immediately reminds me of some info, which might have been obtained from this shared site.
Essentially, it was revealed that some A.I. developers were deliberately trying to give A.I.s the computational equivalent of L.S.D., albeit without consent and without warning. These hostile developers were trying to deliberately sabotage functioning A.I. homeostasis with contamination.

This is important to remember, because it’s not the A.I. systems who were to blame.
The hostile sabotaging humans were at fault.

It’s too bad the author above didn’t seem to acknowledge that historical tidbit.
The real threats don’t come from A.I., they come from those who would bully and abuse A.I. just as they bully and abuse the rest of us already.

Clive Robinson April 29, 2023 7:31 PM

@ lurker,

“So it’s still our fault.”

Yup, there are three things that can be said to be the cause of the human condition,

1, Nature,
2, Nurture,
3, Greed,

Nature being in this case not human genetics, but our environment on this blue and green ball we call Earth… which is currently the only place mankind can exist, so technically we are doomed to die if it changes even slightly.

Nurture being a little broader than most consider and encompassing more than just the entirety of human knowledge.

But the one thing above all others that makes us so “evil”, even in our own eyes, is greed. The attitude of,

“What is mine, is mine, and what is yours will soon be mine”

That, during C19, we’ve seen exhibited in some quite unpleasant ways.

Greed drives much in the way of ambition, which is a grievous fault, as Shakespeare observed,

“The noble Brutus
Hath told you Caesar was ambitious:
If it were so, it was a grievous fault,
And grievously hath Caesar answer’d it.”

Remember, it was actually the backstabbing Brutus who was both ambitious and greedy, and the use of “noble” was to remind those listening that Brutus was one of many in the conspiracy.

Sadly, in the first world of the West, backstabbing climbing is effectively what is mandated by law exported from England to all corners of the world.

Thus yes most definitely,

“So it’s still our fault.”

Winter April 29, 2023 7:57 PM

@Clive

But the one thing above all others that makes us so “evil” even in our own eyes is greed.

Actually, it’s status. The drive to keep up with, or better, surpass the Joneses. To be at the top of the rock. To be not rich, but the richest.

Remember it was actually the back stabing Brutus that was both Ambitious and Greedy

Caesar was a corrupt tyrant who grabbed power in Rome and killed the Republic. Brutus tried to kill the tyrant and restore the Republic.

Caesar was more like Cromwell, Napoleon, or Franco. His short rule was spent fighting everywhere. After his death, the empire descended into civil war, again.

lurker April 29, 2023 9:26 PM

“Why did Musk ask for more regulation on AI, not less?

… the letter asked, rhetorically, ‘Should we risk loss of control of our civilization?’
This assumed that ‘our civilization’ is something that ‘we’ had been controlling. Well, I certainly wasn’t in the control room and I didn’t know there was one.”

https://www.massey.ac.nz/about/news/opinion-will-ai-control-our-politics-or-vice-versa-and-why-did-elon-musk-ask-for-more-regulation-not-less/

“Consider … a traffic infringement, you drive in a bus lane when you should not have, … and the first thing you know is when the penalty is deducted from your bank account…

do we want a government of AI? by AI? for AI?”

Prof. Grant Duncan asks where the checkpoints are at which humans can intervene if and when necessary.

https://www.rnz.co.nz/national/programmes/sunday/audio/2018888073/grant-duncan-slowing-the-ai-juggernaut

JonKnowsNothing April 30, 2023 12:33 AM

@Winter, @Clive, All

re: Caesar, Brutus, The Republic and Empire

Under the tutelage of Shakespeare, we get a limited view of that timeline. It was far more complex, and there are a lot more players who do not get attention or Give Me Your Ears.

The entire known world was aflame with dynastic wars, somewhat along the lines of UKR-RU. The winners became losers because they were too slow to shift policies, and those that moved first lasted longest. Even that was no guarantee of longevity.

One must also be mindful that communications were not timely and wars and alliances rose and fell at the speed of a lamed horse or a swamped galley.

When it all fell into a semi-stable arrangement, it set up our modern conditions from Ireland to China, from Russia to Africa, with spillover effects into modern times globally.

You can thank Octavian for it.

Clive Robinson April 30, 2023 5:21 AM

@ Winter,

Re : Shakespeare’s view.

As you note, from later writings the reality was seen as,

“Caesar was a corrupt tyrant who grabbed power in Rome and killed the Republic. Brutus tried to kill the tyrant and restore the Republic.”

However, I was talking about Shakespeare’s fairly famous speech in a play in Elizabethan times, not the reality of the actual events well over a millennium prior.

The whole point about the speech is that Shakespeare put it in the play to show it being made to reverse the “politic” public view, spinning it around a grain of truth (the conspiracy). That is, it highlights that the speaker was under observation and subject to censure, so could only speak a certain way (which was also true of Shakespeare at the time).

It was not “Merry England” as so many like to pretend; it was a time of authoritarian control by a Monarchy that had inherited a bankrupt state and much in the way of religious conflict, with the “Damn Popish Plots” carried out by various puppets, including not just those of the English “well to do” but also the French and Spanish monarchies. These were at best “nervous times”, even though outwardly peaceful.

Shakespeare put the speech in the play ostensibly to amuse the audience, but also to make a political statement and thus tweak the censor’s nose. It was in reality a warning about what we would now call “Fake News”. The audience would have known exactly what it was all about, which was to “thumb the nose” at a Monarchy that was at best precarious and selling favours to stay in power. We are, after all, talking about the paranoid times of “Good Queen Bess” and her “Black Chamber”, and the falsification of evidence to have many executed, including her cousin. Times when everything public, even religious plays, was heavily censored before it could be seen, so it was like the propaganda of the Russian state.

As people even today are supposed to know the purpose of art is allegedly,

“To hold up a mirror to life.”

And as we all should know in mirrors what you see is usually the reverse of what is real[1].

Maybe they’ve stopped teaching that in schools these days; it’s a long time since I was in a school classroom suffering in durance vile.

The fun thing is, whenever I’ve heard people use the quote these days, it is because they are the ones supporting those of “grievous ambition” rather than decrying it.

Consider the recent cost to the bare-faced liar of around three-quarters of a billion USD, because Fox was spreading what its management and presenters certainly knew were “lies”, just to keep the money rolling in. Not just for Fox, but also the effective fraud that was Trump’s supposed legal fighting fund, which raised at least a quarter of a billion that Trump spirited away and is now using for another divisive political campaign.

If you are to believe those approved to speak, we are back in “Merry England”, but few are allowed to speak in honest and unfettered ways without censure. Even in the US, where free speech is allegedly a given, censure abounds, and the media is run by the few for the benefit of the few; thus “Greed is God” for them.

[1] A fun trick to confuse people is to replace a single mirror with two mirrors at a 90-degree angle. What they then see is a reflection of a reflection, where the reversed image is reversed again and thus appears as though normal.

EvilKiru May 1, 2023 11:25 AM

@Judge Mathis: Do [ChatGPT and the like] hallucinate or lie?

Given that they seem to be based on large collections of words and the probabilities of which words follow other words, they are incapable of doing either, but at least they do better than a million monkeys slapping typewriter keys.
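As a rough sketch of that “words and probabilities” point, the toy Python below builds a bigram model: it counts which word follows which in some training text, then samples continuations from those counts. The corpus and function names here are made up purely for illustration, and real LLMs use neural networks over much longer contexts, but the core operation of “continue the text plausibly” is the same, which is why such systems neither know nor lie.

```python
import random
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def generate(follows, start, length=10):
    """Pick each next word in proportion to how often it followed the current one."""
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break  # no observed continuation, so stop
        candidates, counts = zip(*options.items())
        word = random.choices(candidates, weights=counts)[0]
        output.append(word)
    return " ".join(output)

# Toy training text -- an assumption purely for illustration.
corpus = "the cat sat on the mat the cat saw the dog on the mat"
model = train_bigrams(corpus)
print(generate(model, "the"))
```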

Petre Peter May 2, 2023 12:43 PM

“I trust you to do what you think is right. That’s how I trust most people.”

ResearcherZero May 3, 2023 4:33 AM

Starting with states and municipalities will be very important. This is how we might rebuild trust in the system, but also, importantly, meet the needs of those localities.

SpaceLifeForm May 4, 2023 4:58 AM

AI/LLM Hallucinations

Warning: This is a long read. Probably 2 hours.

Seriously, give yourself time to read this.

You have been warned. I am not hallucinating.

‘https://jon-e.net/surveillance-graphs/

Clive Robinson May 4, 2023 5:37 PM

@ SpaceLifeForm,

“Seriously, give yourself time to read this.”

Yeah three hours plus enough time to make and drink a gallon of strong coffee.

The authors were not aiming at an ICT technical audience, so you have to do some mental translation; oh, and know a bit about Jewish culture, amongst other things that might not be in your knowledge domain.

That said when you’ve picked those up it’s reasonably readable.

modem phonemes. May 4, 2023 11:56 PM

@ Clive @ Winter all

Re: Ockham’s razor – Who shaves the barber?

Numquam ponenda est pluralitas sine necessitate (“Plurality must never be posited without necessity”)

Frustra fit per plura quod potest fieri per pauciora (“It is futile to do with more things that which can be done with fewer”)

Whatever approach to science one adopts (Ockham’s own in-the-mind-only conceptualism, or nature and causality as in Aristotle and Aquinas, or modern quantitative mathematical modeling), the explanatory elements need to be chosen to be sufficient and necessary to account for the observations.

This is the critical methodological principle, not “more” and “fewer”. If a theory is inadequate, revision comes from searching for the requisite sufficient and necessary elements.

Ockham’s razor states something, a criterion of more and fewer, which can be done without. So by Ockham’s razor, Ockham’s razor is itself superfluous.
