AIs and Fake Comments

This month, the New York state attorney general issued a report on a scheme by "U.S. Companies and Partisans [to] Hack Democracy." This wasn't another attempt by Republicans to make it harder for Black people and urban residents to vote. It was a concerted attack on another core element of US democracy: the ability of citizens to express their voice to their political representatives. And it was carried out by generating millions of fake comments and fake emails purporting to come from real citizens.

This attack was detected because it was relatively crude. But artificial intelligence technologies are making it possible to generate genuine-seeming comments at scale, drowning out the voices of real citizens in a tidal wave of fake ones.

As political scientists like Paul Pierson have pointed out, what happens between elections is important to democracy. Politicians shape policies and they make laws. And citizens can approve or condemn what politicians are doing, through contacting their representatives or commenting on proposed rules.

That's what should happen. But as the New York report shows, it often doesn't. The big telecommunications companies paid millions of dollars to specialist "AstroTurf" companies to generate public comments. These companies then stole people's names and email addresses from old files and from hacked data dumps and attached them to 8.5 million public comments and half a million letters to members of Congress. All of them said that they supported the corporations' position on something called "net neutrality," the idea that telecommunications companies must treat all Internet content equally and not prioritize any company or service. Three AstroTurf companies (Fluent, Opt-Intelligence and React2Media) agreed to pay nearly $4 million in fines.

The fakes were crude. Many of them were identical, while others were patchworks of simple textual variations: substituting “Federal Communications Commission” and “FCC” for each other, for example.

Next time, though, we won’t be so lucky. New technologies are about to make it far easier to generate enormous numbers of convincing personalized comments and letters, each with its own word choices, expressive style and pithy examples. The people who create fake grass-roots organizations have always been enthusiastic early adopters of technology, weaponizing letters, faxes, emails and Web comments to manufacture the appearance of public support or public outrage.

Take Generative Pre-trained Transformer 3, or GPT-3, an AI model created by OpenAI, a San Francisco-based start-up. With minimal prompting, GPT-3 can generate convincing-seeming newspaper articles, résumé cover letters, even Harry Potter fan fiction in the style of Ernest Hemingway. It is trivially easy to use these techniques to compose large numbers of public comments or letters to lawmakers.

OpenAI restricts access to GPT-3, but in a recent experiment, researchers used a different text-generation program to submit 1,000 comments in response to a government request for public input on a Medicaid issue. They all sounded unique, like real people advocating a specific policy position. They fooled the Medicaid.gov administrators, who accepted them as genuine concerns from actual human beings. The researchers subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased. Others won’t be so ethical.

When the floodgates open, democratic speech is in danger of drowning beneath a tide of fake letters and comments, tweets and Facebook posts. The danger isn’t just that fake support can be generated for unpopular positions, as happened with net neutrality. It is that public commentary will be completely discredited. This would be bad news for specialist AstroTurf companies, which would have no business model if there isn’t a public that they can pretend to be representing. But it would empower still further other kinds of lobbyists, who at least can prove that they are who they say they are.

We may have a brief window to shore up the flood walls. The most effective response would be to regulate what UCLA sociologist Edward Walker has described as the “grassroots for hire” industry. Organizations that deliberately fabricate citizen voices shouldn’t just be subject to civil fines, but to criminal penalties. Businesses that hire these organizations should be held liable for failures of oversight. It’s impossible to prove or disprove whether telecommunications companies knew their subcontractors would create bogus citizen voices, but a liability standard would at least give such companies an incentive to find out. This is likely to be politically difficult to put in place, though, since so many powerful actors benefit from the status quo.

This essay was written with Henry Farrell, and previously appeared in the Washington Post.

EDITED TO ADD: CSET published an excellent report on AI-generated partisan content. Short summary: it's pretty good, and will continue to get better. Renée DiResta has also written about this.

This paper is about a lower-tech version of this threat. Also this.

EDITED TO ADD: Another essay on the same topic.

Posted on May 24, 2021 at 6:20 AM

Comments

Chris Dotson May 24, 2021 6:45 AM

Another solution is to issue digital identities to our citizens and have them put their smart cards/NFC/USB device in and push the button to sign their comments. Trusted digital identities issued by the government would solve a lot of problems, although it would probably cause a few too.
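
To make the mechanics concrete, here is a minimal sketch of the sign-and-verify step such a scheme implies, assuming an Ed25519 key pair and Python's cryptography package. It is illustrative only: in a real deployment the private key would live on the citizen's smart card or USB token and never leave it, and the agency-side handling is invented for the example.

```python
# Hedged sketch only: how a signed public comment could be verified.
# Assumes the "cryptography" package; key handling is deliberately simplified.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Citizen side: key pair generated at identity issuance.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()        # registered with the issuing authority

comment = b"I support keeping the current net neutrality rules."
signature = private_key.sign(comment)        # the "push the button" step on the device

# Agency side: look up the registered public key and check the signature.
try:
    public_key.verify(signature, comment)    # raises InvalidSignature if forged or altered
    print("accepted: comment signed by a registered identity")
except InvalidSignature:
    print("rejected: signature does not verify")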

echo May 24, 2021 6:54 AM

There's currently an organised and massive denial-of-service attack on the UK's Freedom of Information law, and a propaganda exercise, pummelling state organisations with fishing-expedition FOIs written in pejorative language and concerning their interactions with human rights organisations. This is all designed to sow doubt in a fragile political climate, or to undermine FOI law to the point where a review is likely to remove it or weaken it beyond any usefulness.

At the same time there are concerted media attacks on certain human rights issues, as well as vexatious legal actions designed to weaken human rights law, or to weaken and skew perception of these laws.

All of this is a wedge campaign.

Laws do exist, such as the Human Rights Act, the Equality Act, the communications acts and the Prevent legislation. Case law exists to establish the point at which cumulative action crosses the threshold of criminality, and I'm fairly sure there is enough law to indicate common cause and bad-faith actors. The Fraud Act may apply in some circumstances where the documents created are improper in any way. I'm not a lawyer nor an expert in these areas, but I'm fairly sure someone could make something of it if they wanted to, so some kind of approach can be constructed.

Because of the national security issues (human rights, access to services and democracy), as well as the criminal aspects and the sometimes substantial burden of these campaigns, not just on the state apparatus but on third parties including human rights organisations and citizens who may be affected by the general action or the particular focus of a campaign, I feel this places the issues within the remit of the National Crime Agency. Some hot-button issues require special protection, which is covered by the Public Sector Equality Duty.

My sense is the law and protection mechanisms do exist. The hard part is simply coming up with a codified approach which provides protection while not compromising the essential functions of society.

Certain bad actors definitely have a love of new gimmicks, among other things, while the state can be lumbering and slow, or individual organisations and citizens can be disconnected or snowed under. In that respect the bad actors have an advantage, but it need not be an advantage which lasts very long.

With regard to AI and electronic campaigns such as mass emails or social media comments, there's always the option of going old school and giving habeas corpus a new lease of life. In other words, put down your clipboard and pen, stop gazing at that computer screen, and pay attention to the person sitting in front of you. As bad actors go digital, go analogue.

Etienne May 24, 2021 7:21 AM

If I was a politician, I wouldn’t have email or a web site.

I’d use my franking privileges to mail my constituents, and tell them to send me a postcard, and keep the postcards in a file.

Insecure email is for Soviets.

Greg May 24, 2021 7:43 AM

What is the result of these types of attacks?

Will politicians be fooled into acting against the will of the people, and lose the next election?

Tatütata May 24, 2021 9:07 AM

How much longer before someone tries to draft legal (e.g.: SCOTUS) decisions with GPT-3? Dial the ideological cursor, befuddle the reader, apply an adroit sleight of hand, sprinkle a couple of semantic slips and double negations, and voilà!

- May 24, 2021 9:09 AM

@Natanael L:

And I managed to mess up my blog URL

A blog that has been entirely unused for over half a decade…

Which begs the question: "Why would you link to something that has been effectively abandoned?"

After all, anyone could do that…

Gary Moore May 24, 2021 9:11 AM

Could a possible solution be to simply make social media sites (Facebook et al) be liable for false information? Seems that Facebook could prevent this (if not entirely – at least mostly) if they really wanted to….

and I suppose that
1, former reader
2, Another Former Reader
3, Yet another former reader

missed the “This essay was written with Henry Farrell” sentence at the end….

AlexT May 24, 2021 9:13 AM

I think someone missed inserting a reference to Russian interference.

More seriously, the issue is real, but as others pointed out there are possible mitigations.

Tatütata May 24, 2021 9:15 AM

And I managed to mess up my blog URL

IMO, a pretty useless facility, which on this blog and elsewhere is mostly used by s{p,c}ammers, who post pseudo-comments like “Nice blog you have here”, but insert the real payload as an URL advertising for Nigerian bitcoin-powered virility enhancers or whatever.

Tatütata May 24, 2021 10:12 AM

There's currently an organised and massive denial-of-service attack on the UK's Freedom of Information law, and a propaganda exercise, pummelling state organisations with fishing-expedition FOIs written in pejorative language and concerning their interactions with human rights organisations.

Do you have a source for that? And is it AI driven, or human generated?

I heard of such tactics in the US in the last few years, e.g., harassing climate researchers. A contagion of the same phenomenon?

Is this avalanche of requests anonymous and/or pseudonymous?

FOI is more abused by administrations, with exorbitant fee requests, excessive delays, willful misunderstanding, and so on and so forth.

On the continent, authorities try to thwart online FOI portals with tactics such as alleging copyright protection (DE: "Zensurheberrecht") or insisting on a physical address for the requester (e.g., DE). In Spain, government-supplied FOI portals demand a national ID card ("DNI") number, and won't let you go past that point if you can't supply a valid one, even though the law allows for "any person" to make a request. Frequent requesters can thus be identified and flagged, and the system is in practice unavailable to foreigners.

The future of FOI in post-Brexit Britain is already uncertain. What is the status of the EU’s Freedom of access to information directive (2003/4/EC)? It is an implementation of an international treaty, the Aarhus convention, and (was?) part of UK law as the Environmental Information Regulations 2004.

https://en.wikipedia.org/wiki/Environmental_Information_Regulations_2004

This directive is much stronger than most national statutes, but applies only to environment-related information.

My most recent UK FOI request (~3y ago) was a failure. The requested document was provided, but all parts not narrowly related to the specific query language were redacted (80%+ of it blacked out). There is no legal basis for this, and I could/should have pointed to ICO guidance explicitly prohibiting this practice, but I was disgusted at that point, and gave up. Sirs Arnold and Humphrey grin from their graves.

UK government departments must file annual reports with Parliament. Many of them are jokes, at the level of a coloring book, with plenty of irrelevant details but hardly any information about the actual operation and results of the administration.

Clive Robinson May 24, 2021 10:56 AM

@ Tatütata,

Nigerian bitcoin-powered virility enhancers

+1, I hope you’ve registered that design idea 😉

Though the question arises about where you would put the slot for the BitCoin to be inserted 0:)

Sean May 24, 2021 10:56 AM

Maybe there’s a case where such outrage would be justified but net neutrality is clearly unconstitutional in the first place.

Second, why would the question of private property rights be open to interpretation or commenting, as if it was a fashion statement that depends on someone's opinion this or that week?

Third, the Democrats control most of mainstream media and perform subtle spamming on a daily basis.

The bottom line is it doesn’t matter what the comments say and whether they are or aren’t fake.

Robin May 24, 2021 11:17 AM

@Greg: "Will politicians be fooled into acting against the will of the people, and lose the next election?"

Or will they employ their own AI commenters to demonstrate massive public support for their pet projects?

Bet they do that already.

Impossibly Stupid May 24, 2021 11:40 AM

making it possible to generate . . . comments at scale, drowning out the voices of real citizens in a tidal wave of fake ones.

We need to address that problem instead of framing this as a problem with “AI” (which, as I have noted before, this current crop of machine learning algorithms should not be conflated with). This is very directly related to topics like electronic voting, because it is yet another way the voice of the people can be heard. So whatever process validates public comments should work regardless of their content; it should even be effective for simple yes/no polls.

These companies then stole people's names and email addresses from old files and from hacked data dumps and attached them to 8.5 million public comments and half a million letters to members of Congress.

That is a serious amount of identity theft at a minimum, and quite possibly a massive human rights violation. It goes so directly to subverting democracy that it might rise to the level of treason. Officers of these companies should be charged with capital crimes.

The danger . . . is that public commentary will be completely discredited.

In many corners of the Internet, it already is. Your blog itself is an interesting case study, Bruce, because it still manages to curate mostly quality comments despite allowing mostly anonymous posting. I wonder what mechanisms you use and could stand to improve upon in order to keep other forms of public commenting safe from manipulation at scale.

@Chris Dotson

Trusted digital identities issued by the government would solve a lot of problems, although it would probably cause a few too.

Well, for starters, they should simply verify the identifying info they’re already getting! I can’t go to a podunk web site and set up an account without my email address being checked, yet the federal/state government isn’t doing that basic of a challenge-response test for the submissions it receives? Makes me wonder if there isn’t corruption at a deeper level that needs to be addressed.

@Tatütata

IMO, a pretty useless facility, which on this blog and elsewhere is mostly used by s{p,c}ammers

While that is true in general, I happen to use it to verify that my identity is linked to my comments here. A copy of most “public commentary” I make is on my blog, and I can give a specific URL to verify that I’ve actually said what this blog’s comment section says I said. That’s another solution to the problem being discussed, making fakes a pointless endeavor regardless of how sophisticated their generation may be.

wumpus May 24, 2021 2:08 PM

@Robin “Or will they employ their own AI commenters to demonstrate massive public support for their pet projects?”

You can already buy as many Twitter followers and other "likers" as you want. Getting marginally appropriate responses shouldn't be too hard.

echo May 24, 2021 3:21 PM

@Tatütata

There's a lot of cross-fertilisation between the UK and US far right. Whether it's legal action, abuse of public office, abuse of media jobs; use of mass lobbying campaigns, either directly with politicians or via social media, under named or anonymous status; use of personal connections and in some cases celebrity status. Some of the material is outright lies, and provably so for a long time, but there is a propaganda and recruiting element too. They exploit complacency, imbalance of resources, and in some cases the default camera-shy secrecy of some state workers to gain ground by default.

UK security services have publicly admitted they were slow on the far right threat and have also admitted that the far right have infiltrated state institutions. I personally use a broader definition than the usual cartoon figures. The far right threat isn't just the usual cartoon figures but people with attitudes and points of view which can be politically exploited.

There is a government agency which I won’t name which made a decision which was very odd at best and gave special treatment to a certain shady far right organisation. People have already put their FOI’s in and there is talk of a judicial review rather than relying on firefighting with the usual after the event complaints.

The advice I received from a former civil servant about FOIs is to be specific. I've personally made FOIs and Subject Data Access Requests of Parliament itself and been punked by staff. This and everything surrounding it is in addition to previous actions I was advised would attract an ECJ case. It's all very curious. The most useful information wasn't what information they returned but what they did and didn't do, and what they chose to ignore or not ignore. There are some whoppers in there. This is all logged. I have also logged publicly recorded events surrounding conduct of staff and members of the house which provide extra context, as well as samples of behaviour of members of the house in committee and individual MPs on various issues. I've also logged reports of the various complaints systems in place and how they have been used and abused. It's all a bit of a mess but shows parliament up on a functional level as amateur hour and in breach of various laws and legal obligations as well as Convention rights. This awaits the attention of lawyers as and when I get around to securing one.

Some people might think I was a complete clown for not getting what I wanted done. They would likely be correct. Other people might say it was an effective social engineering attack for coming away with more of the kind of data about their skills and attitudes and behaviours they wouldn’t want to admit. They may be right about that too. They even exfiltrated it for me. How nice of them!

If you take a step back you can see how this all works in tandem. Multiple attacks are launched via multiple stooges on a continuous rolling basis abusing trust and loopholes (whether real or perceived) and attitudes to create newsworthy items and trigger compliance investigations and so on and so forth. It’s designed to weaken and wear down and influence.

The UK government is very shady both in its public agenda and how it goes about implementing change some of which is a policy tweak here or an oversight there, outright breaking of law or evading of law, hoping nobody will notice.

In addition to FOIs being dodged, blanket bans are unlawful, as are rights which exist in law only in theory and not in practice. There is a higher expectation of performance when it comes to Convention rights.

echo May 24, 2021 5:39 PM

Picking up on the AI component versus astroturfing and variants of same it really comes down to real people versus deepfakes. That is an issue but I think a lot of law already exists to cover this as well as various verifications and investigation methods. A lot of loopholes aren’t really loopholes as such but more relying on poor administrative and poor decision skills and lack of policy or inadequate resources and various parts of the system not joining the dots.

The AI component can play a role in dodging quality control at the perimeter and scaling so systems which simply weigh the data can be skewed.

It’s more difficult to dodge old school on the ground assessment by real people meeting real people, and academic studies, and surveys. It’s also difficult to dodge a track record. While an act first apologise later system can ruin lives and be expensive in reversing bad decisions forensics is also getting a little smarter plus the human rights aspect can introduce room to pause for thought.

It's a bold thing to say, but I suspect within a few years this whole problem area will be seen to have been a flash in the pan. Once you see the con or know how the trick works it loses its power.

There is value in reputable people and institutions and methods. No it’s not always the ones with the highest status or the biggest size. Independents or self-help groups have legitimacy too. The erosion of this and the erosion of perceptions by bad actors very very nearly pulled it off but they didn’t pull it off. We know who they are. We know what they are up to. They have revealed their hand and are unmasked. Surely, this is their error? Expensive lawyers and secretive tax haven accounts and lurking behind fine print can only keep them from disgrace and ruin and out of jail for so long.

That’s my theory!

Weather May 25, 2021 1:51 AM

@thingamyjik
You got me wrong, I’ll buy your next book with a cypher for name,see you then..think…

StillGodTheBlues May 25, 2021 3:33 AM

I think this is great because it will force politicians to leave their cozy bureaus and couches and get outside and talk to real people in the real world. The real people will have to do that, too – of course.

It could be lit for everyone to have some ephemeral conversation.

I have hopes – many people are currently tired of "social" media in this pandemic.

Ismar May 25, 2021 5:07 AM

It is a worrisome trend as we seem to be much better at misusing these powerful technologies than putting them to beneficial use.
How this is going to play out as more and more interested parties adopt similar techniques of manipulation is anybody's guess, but it is not looking very promising at all.
A solution: making decisions based on the precious few trusted sources left and using time-proven analysis to test the information against.
To this end I find the work of David Robert Grimes (physicist, cancer researcher and journalist) a very good source of education on how to navigate the modern (mis)information landscape.

Gert-Jan May 25, 2021 6:43 AM

This is likely to be politically difficult to put in place, though, since so many powerful actors benefit from the status quo.

They don't realize how the use of this automated robo-comment stuff can grow exponentially in a short timespan. When it causes serious harm (basically disabling the entire online discussion) it is too late to start with new laws to make it illegal. Such a dramatic situation will affect these politicians too.

I don’t want a mandatory true identity system with strict authentication, which undoubtedly will be suggested as the “solution” once we’re in that situation. This means that the current practice has to be combatted.

I must admit, it can be hard to specify exactly what should be outlawed. Going after identity theft is all good and nice, but that is not what it's about. Secretly claiming to be multiple identities is at the heart of it.

Winter May 25, 2021 7:08 AM

@Gert-Jan, All
“don’t want a mandatory true identity system with strict authentication, which undoubtedly will be suggested as the “solution” once we’re in that situation.”

In a certain sense, this is a self-inflicted American problem. Only in the USA do companies have Free Speech rights. In the rest of the world they don't, and even companies who merely pay for AstroTurfing would be liable for fraud. This moronic "personhood" principle makes it more difficult to handle companies misrepresenting the origin of their messages.

But basically, what this boils down to is fraud. Companies sending in fraudulent comments misrepresenting their origin. That is easily a criminal offense. Those who ordered it are criminals and should be treated as such. Those paying them should be treated as people paying for organizing crimes.

There is not yet a need for an online identity system, just a system to actually prosecute online fraud. Starting with those who paid for this service.

echo May 25, 2021 8:29 AM

@Winter

In a certain sense, this is a self-inflicted American problem. Only in the USA do companies have Free Speech rights. In the rest of the world they don't, and even companies who merely pay for AstroTurfing would be liable for fraud. This moronic "personhood" principle makes it more difficult to handle companies misrepresenting the origin of their messages.

But basically, what this boils down to is fraud. Companies sending in fraudulent comments misrepresenting their origin. That is easily a criminal offense. Those who ordered it are criminals and should be treated as such. Those paying them should be treated as people paying for organizing crimes.

A fair few English seem to be getting the idea that free speech means they have the right to say anything they like. The right wing press and current Tory regime like to encourage this. It creates a lot of problems.

But you are correct. In Europe (which includes the UK) free speech is a qualified right. It’s not a right to trample over human rights or equality or harassment law or commit fraud or ignore anti-terrorist law or computer misuse or communications law.

It’s a similar thing with copyright. Some Americans (and English who have picked up bad habits) think that just because they see it they can grab it. Not so. I’ve done some relatively deep reading on this for someone who is not a lawyer. Even if the first few provisions are easily met there are considerations and processes which must be gone through before reaching a decision to use a work under fair use. Even if it is fair use skipping this renders the use unlawful and, yes, people have got done for this.

Even if a platform is American based and waves around "free speech" they may still be within the reach of the courts of a foreign jurisdiction. Oops. Different jurisdiction, different laws. If the platform has any presence in the jurisdiction it places itself at legal risk. A majority of American platforms default to an American view of law, including T&C's which hold zero force in another jurisdiction. Most tend to default to complaints processes which are from the American viewpoint and demand this, that and the other with respect to who has jurisdiction and how it will be settled but, again, this holds zero force in other jurisdictions. The fact most people don't have the resources to mount a legal challenge, or perhaps don't realise they can, is neither here nor there. There's nothing stopping, say, police prosecuting, or an individual bringing a civil case (or even a private criminal case) via the courts.

Well, that set the cat among the pigeons.

Gert-Jan May 25, 2021 10:41 AM

@winter
Companies sending in fraudulent comments misrepresenting their origin

When say Nike posts a comment as say BurgerKing, sure, it’s misrepresentation. Court case, slam dunk.

But comments by, say, "DarkOx" or "dskoll", backed by a public mailbox account: who are those supposed to represent? Or not represent?

If they are posted by "volunteers" who happen to "help" a company in a chain of companies, I don't think such a case can be made.

For any company, now and in the future, it will be legal to ask the general public to give their opinion. And probably also to fund some organisation to “increase participation” in such discussions.

In other words, that approach is fine for some cases, but not enough.

Winter May 25, 2021 11:59 AM

@Gert-Jan
"If they are posted by "volunteers" who happen to "help" a company in a chain of companies, I don't think such a case can be made."

Outside of the USA, paid-for commercial speech is considered advertising. Not divulging payment is easily labeled fraud. I would expect falsely submitting AstroTurfed opinions as grassroots opinions to be considered criminal behavior. Doing so to official governmental offices or political representatives could easily be labeled an aggravating circumstance.

Steven May 25, 2021 12:59 PM

It is interesting to see when fact catches up with speculative fiction. In Neal Stephenson’s book Anathem he talks about this exact situation. He called the technology described in Bruce’s article “artificial inanity”. An apropos description if there ever was one. Bruce has talked at length, on this blog and in books, about trust being a core feature of any human-to-human interaction. Since the internet acts as a proxy for this interaction it causes the issue described above.

In the book Neal describes that the result was a ‘dark age’ where you couldn’t believe anything posted anywhere. I think we are in this situation now or about to be. It wasn’t all dystopia though, Neal offered his version of “a way out”. His solution was an automated reputation filter. Any information posted also had a reputation score. The score offered the reader a way to measure the confidence of the information presented. Not a perfect solution, to be sure. It will quickly turn into an arms race. Then again, that is what the endgame of the article is about; weaponizing disinformation.

Reading the comments above, before I post this, I see a lot of complaining but no proposals for a solution. I like Neal’s idea. Perhaps we need to invest in automated reputation/confidence systems as a countermeasure? There are minds on this blog far more experienced than mine. What are your solutions?

SpaceLifeForm May 25, 2021 4:42 PM

@ ALL, Clive

This is not a hard problem to solve, it’s just a PITA problem to solve.

HSM, PublicKey/PrivateKey pair. Enroll handle/nic/nym using PublicKey and a signed challenge.

Server knows PublicKey and handle/nic/nym.

Server can validate signed challenge.

The fakers will not have the PrivateKey.
Login or post, must be signed.

No outside party.

The hassle is to get the challenge to the HSM and get the signature back out.

Securely.
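
If it helps, here is a rough sketch of the enroll-then-challenge flow described above, assuming Ed25519 keys and Python's cryptography package. The HSM internals and the secure transport of the challenge and signature (the real PITA, as noted) are waved away; the handle is invented for illustration.

```python
# Illustrative sketch of handle enrollment and signed-challenge validation.
# In practice the private key stays inside the HSM; only signatures come out.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Client: key pair held by the HSM.
hsm_key = Ed25519PrivateKey.generate()

# Server: enrollment records handle/nic/nym -> public key.
registered = {"somenym": hsm_key.public_key()}

# Server: fresh random challenge for each login or post.
challenge = os.urandom(32)

# Client: the HSM signs the challenge (getting bytes in and out securely is the hassle).
signed = hsm_key.sign(challenge)

# Server: validate against the enrolled key; fakers without the private key fail here.
try:
    registered["somenym"].verify(signed, challenge)
    print("post accepted under this handle")
except InvalidSignature:
    print("rejected: not the enrolled key")
```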

Clive Robinson May 25, 2021 6:09 PM

@ SpaceLifeForm, ALL,

HSM, PublicKey/PrivateKey pair. Enroll handle/nic/nym using PublicKey and a signed challenge.

As you note it’s not difficult just a PITA.

It’s also not a new idea as such, the first time I saw someone talking about it with regards to “blog posting and reputation” was @iang (Ian Grigg https://iang.org ) over at Financial Cryptography blog ( http://financialcryptography.com/ ) more than a decade back.

But I don't think the basic idea of signed communications for public posting was original to @iang, though I don't remember seeing anyone use it as part of a "reputational system" prior to @iang suggesting it.

To be honest the last time I looked at the FC blog was a few years back, because it had issues with https keys that I don’t remember being resolved.

echo May 25, 2021 7:03 PM

@Steven

Reading the comments above, before I post this, I see a lot of complaining but no proposals for a solution. I like Neal’s idea. Perhaps we need to invest in automated reputation/confidence systems as a countermeasure? There are minds on this blog far more experienced than mine. What are your solutions?

I’m glad someone asked this question. It was rather hanging out there.

As well as the law, which on second look is much better at resolving issues before a runaway event occurs, I have looked into trust systems before. I can't remember when now; it was probably around or over a decade ago. From other comments it looks like I wasn't the only person pondering this, and it seems there are very few original solutions. I haven't thought too deeply about it, but imagine a system where you could have a hierarchy of real ids, anonymous ids and aliases. Everything would hang on your chosen root, but it would allow for identities based in a real verified id or an anonymous id. (Like these dreaded NFTs, an anonymous root would have value.) You could have an array of permission flags to moderate disclosure. Websites or an equivalent receiver could moderate on those flags too. You could add a reputation system or other metadata. I suppose, like schemes for "bit level" control of the internet, you could have it switched on or off depending on the situation. (I pushed this before without thinking it through. It turned out Microsoft Research had produced a white paper on the topic, which the UK government seized on like an alcoholic seizes on a whiskey bottle, and Microsoft was wanting to peddle some kind of technical solution. It surfaced again a few years ago before disappearing again and I have heard nothing since.) Before anyone points out "Ah, but": people hiding behind an anonymous alias can be libelled and can sue for libel if certain conditions are met. Basically, is it an established alias with value or reputation? There's also the issue that in law it may also count as private property, much like an IP address is at the time you use it.
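
For discussion's sake, a bare-bones sketch of the hierarchy described above: a chosen root (verified or anonymous), aliases hanging off it, and per-alias disclosure flags a receiving site could filter on. Every name and flag here is invented purely to illustrate the shape of the idea, not a worked-out scheme.

```python
# Invented illustration of the root-plus-aliases idea; not a real design.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Alias:
    handle: str
    disclose_real_name: bool = False   # permission flags moderating disclosure
    disclose_country: bool = False
    reputation: float = 0.0            # optional metadata a site or reader could weigh

@dataclass
class RootIdentity:
    verified: bool                     # rooted in a real, verified id, or anonymous
    aliases: List[Alias] = field(default_factory=list)

me = RootIdentity(verified=False,
                  aliases=[Alias("echo", disclose_country=True)])
print([a.handle for a in me.aliases if a.disclose_country])
```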

I’m too lazy to think it through but it can be as simple or complex as you want. Needless to say it’s more than just a technical issue and requires other experts to weigh in on legal and political aspects.

For a lot of people complaining about privacy 99% of this is already implemented in one form or another. It’s just not expressly codified as such.

One thing I’ve learned about politicians is to be careful what you say. There’s times when it’s useful as they toddle off and get something done without you having to lift a finger. There’s other times when you wished you had said nothing. Not that I’m necessarily more careful today but then as nobody pays attention to anything I say anymore it’s a moot point.

Clive Robinson May 25, 2021 9:48 PM

@ echo, Steven, ALL,

The first question you should really ask when talking about any authorisation system (which a reputation system is) is

“Roles”

I've mentioned it a few times in the past. Whilst idiots in power want centralized systems, they are, as our host @Bruce has pointed out in the past,

“A disaster waiting to happen, due to the ‘all the eggs in one basket’ design of such systems”.

It’s also fairly clearly not how humans work either.

Your reputation is different for all the different roles you have in life. So there are your immediate family roles as daughter, son, brother, sister, father, mother, wife, husband, etc. Then there is employer/employee and the myriad of customer/supplier roles and any social roles etc.

After a few seconds' thought you should realize that whilst you are common to all your roles, the other half of any of your roles has no relation to any other role, nor should it. That is, your reputation as the knitting circle club president has no relation to your personal finances or any employer or other professional role.

Thus you need to view the reputation role the other way around. That is it should be fully decentralized and thus easy to make anonymous.

So you've elected to have a user name on this blog of "echo" or "steven" etc; they are "handles", not your real names. However as far as our host is concerned he does not care who you are, just your history of posting. Thus the more unproblematic posts you make, the higher your "Schneier-blog reputation score" is. If you make a few questionable comments your reputation score decreases. Arguably your reputation should rise linearly but fall by some non-linear method. So, as in life, building up a good reputation for your Schneier-blog role takes time and effort, but making even one faux pas could immediately halve it or worse, with each successive faux pas halving your remaining reputation score.
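
As a toy illustration of that asymmetry (the constants are arbitrary and not a claim about how this blog actually works): the score creeps up linearly with each unproblematic post and is halved by each faux pas.

```python
# Toy version of the per-role score described above; the numbers are arbitrary.
def update_reputation(score: float, post_ok: bool) -> float:
    if post_ok:
        return score + 1.0   # builds slowly, one point per unproblematic post
    return score / 2.0       # each faux pas halves whatever remains

score = 0.0
history = [True] * 20 + [False] + [True] * 5 + [False]
for ok in history:
    score = update_reputation(score, ok)
print(score)   # 20 good posts, a faux pas, 5 good posts, another faux pas -> 7.5
```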

Your reputation on our host's blog is controlled by the blog host's view of your behaviour, not anybody else's; after all it is kind of his front room. Because this is all that actually matters in this particular role it can and should remain isolated from other roles. Importantly, as it can be effectively anonymous, it has no effect on any other role in your life, as it really should not.

It is actually that simple, it’s idiots in power that make these sorts of things not just complicated but centralized, because they want not just to put your entire life under a microscope, they want the ability to build you up or smash you down as they see fit, with no right of appeal, or any oversight on them.

It’s one of the reasons I think “single sign on”(SSO) systems not fully under the individuals control is in effect a breach of their human rights. But that as they say is a discussion for another day.

Erdem Memisyazici May 25, 2021 11:49 PM

“Organizations that deliberately fabricate citizen voices shouldn’t just be subject to civil fines, but to criminal penalties.”

This is why newer versions of these types of “services” have gone underground. They simply don’t admit to working for anyone and are paid in cash. People even hire strangers on the spot.

I’ve personally seen one man protest about the $20 he received to “get on the boat and be loud” which of course isn’t supposed to happen but it shows how some operate like they’re selling stolen watches.

Impossibly Stupid May 25, 2021 11:58 PM

@Steven

Reading the comments above, before I post this, I see a lot of complaining but no proposals for a solution.

You missed mine: a federated approach. My public comments are on my blog, nicely hashed for identification and immutability, and then copied to this site (and others). In a more perfect world I would just be able to link to my original comment, but it’s only a little extra work on my part.

@Clive Robinson

Your reputation is different for all the different roles you have in life.

This seems to be less and less true, or perhaps has never been true. If you’re a jerk to a waiter (or whatever) the only way that doesn’t get you the reputation of being a jerk in general is when you’re chumming around with other people who think that being a jerk to waiters is a good thing (i.e., other jerks). Compartmentalized reputation has allowed an awful lot of awful people to keep doing awful things.

After a few seconds thought you should realize that whilst you are common to all your roles the other half of any of your role has no relation to any other role nor should it.

This is where you go off the rails. Your reputation does and often should carry across your roles. If you decide your job description includes kneeling on a guy’s neck until he dies, maybe your wife should decide to divorce you. Maybe your “non-role” reputation with people you haven’t yet or won’t ever meet should be in the gutter. You need to put in more than a few seconds of thought into this.

Importantly as it can be effectively anonymous it has no effect on any other role in your life as it realy should not.

But it really should. It’s especially important when it comes to the political matters that Bruce initially discusses, because the reputation you bring to a new discussion/issue matters a great deal (e.g., are you a registered voter with standing regarding the laws in question?). Even when it comes to his blog comments, it should be easy to see that if some newcomer could link their anonymous identity with the same anonymous identity that posted quality comments on another site, there is a “web of trust” value in that which could add to their reputation here.

SpaceLifeForm May 26, 2021 12:51 AM

@ impossiblystupid

My public comments are on my blog, nicely hashed for identification and immutability, and then copied to this site (and others).

Immutability?

Ok, here is an experiment you can try. Put a signature block after your text.

Then, do the copy. Then later, take the text you see on a site, and verify the signature by copying back the text.

Put some single and double quotes in your text. Or maybe an ampersand. Comma, double dash, throw in a greater than sign.

Even when it comes to his blog comments, it should be easy to see that if some newcomer could link their anonymous identity with the same anonymous identity that posted quality comments on another site, there is a “web of trust” value in that which could add to their reputation here.

No. That is exactly what troll farms do.

SpaceLifeForm May 26, 2021 1:47 AM

@ impossiblystupid

re troll farms

Note that it could lead to a “web of mistrust” also.

The key is having a signature. It forces a ‘cost’ on the troll farm. It also makes it easier to identify the players, even if via a handle/nic/nym. At least you know how many you are dealing with, which is critical intel in and of itself. Even if it is a one-man troll farm, the internet traffic will point to that. Few exist.

Perezas May 26, 2021 2:08 AM

Faking comments is an easy task for an AI. But the Protasevich video released by the Belarusian government is a deepfake AI video.
The future has arrived: not just written comments, but audio and video can be faked by an AI.
And public opinion is easier to deceive than ever before.

Winter May 26, 2021 2:21 AM

@Perezas
"Protasevich video released by the Belarusian government is a deepfake AI video."

Unlikely; his father explained that he saw his son's nose was broken and the left side of his face was covered in makeup.

Perezas May 26, 2021 3:23 AM

@Winter
The Deepfake video can deceive even his parents.
You just need 40 seconds to create one Deepfake video.
Watch this video for examples on what can be achieved:
https://youtu.be/iXqLTJFTUGc
Any Protasevich word can be easily edited.

Winter May 26, 2021 3:39 AM

@Perezas
“The Deepfake video can deceive even his parents.”

A deep fake video released by Belarus to show they tried to hide that he was tortured and botched it?

Sounds too stupid even for Belarus.

Perezas May 26, 2021 3:57 AM

@Winter
Well, to hijack a Ryanair flight in plain sight doesn’t sound very clever

Winter May 26, 2021 4:15 AM

@Perezas
“Well, to hijack a Ryanair flight in plain sight doesn’t sound very clever”

What are you trying to say? That Belarus did not torture Protasevich but made a deep fake video to make it look like they tortured him?

I'll take the simpler route: that they simply did torture him, forced him to confess, and then botched the coverup.

Perezas May 26, 2021 4:27 AM

@Winter
No.
I think they faked a video to soften the reaction against them.

Cyber Hodza May 26, 2021 7:14 AM

All of the discussions so far assume that, indeed, our western democracies are driven by popular demand.
This, however, is not (entirely) true, as a plethora of lobbyists plays a major role in the selection and election of "our" elected representatives, as well as in influencing their decision-making processes thereafter. In addition, lobbyists are never anonymous to their targets, as their expectations of subsequent reciprocity require them not to be.

So if you want to influence how your democracy is run (for better or worse) you’d better focus your efforts on the lobbying as it produces much better results than any public discourse can.

Trying to design solutions which would allow some kind of fair allocation of influence, based solely on what most of us here would call sound and just solutions, is a much harder undertaking, one that I am yet to be convinced is possible to achieve in practice.

Clive Robinson May 26, 2021 8:06 AM

@ Cyber Hodza,

Trying to design solutions which would allow some kind of fair allocation of influence, based solely on what most of us here would call sound and just solutions, is a much harder undertaking, one that I am yet to be convinced is possible to achieve in practice.

Shall I convince you of the opposite?

It is a well established principle that,

“What one man can make, another man can unmake or duplicate”

All it needs is a little time.

The process of making effective legislation that is focused, not overly broad in scope, and not full of unintended consequences takes considerable time.

Thus those that would be regulated have plenty of time to come up with new methods of operating to get around any proposed or enacted legislation.

The alternative is really, really bad legislation which has scope sufficient to swallow anything it is pointed at…

So damned if we do and damned if we don’t…

Sorry, but either way we don’t get what we want.

And that’s before we start talking about the pace of technological change…

Clive Robinson May 26, 2021 9:43 AM

@ Winter,

Consider: entrenched "cognitive bias" is rarely persuaded otherwise by "reason".

As we have recently seen here, the Upton Sinclair observation about people's salaries and what they profess to believe being based on them also applies…

Winter May 26, 2021 9:53 AM

@Clive
"Consider: entrenched "cognitive bias" is rarely persuaded otherwise by "reason"."

I have long, long ago lost any expectation that “reason” can convince people I converse with on the internet. I cherish reason when I encounter it, but I never expect it.

However, it might be worthwhile to make it abundantly clear to other readers when commenters have left reason by the wayside. With one response there might still be doubts; after half a dozen, any doubts will have been put to rest.

ADFGVX May 26, 2021 11:02 AM

@echo

Oh for God’s sake stop that passive anger sexism. Women can do work as well you know so knock it off with the assumption of male by default.

You’re making too much money off your femaleness there. You already “assume” dudes are so poor they can’t make their mortgage payments, and you’re forcing them to take the politically correct “or-she” in the butt just to rub it in their faces with fictitious divorce proceedings.

MarkH May 26, 2021 11:06 AM

@Clive, Winter:

I read a formulation that was very helpful to my understanding:

“If a belief is not based on facts and logic, why expect that facts and logic can alter that belief?”

Everybody does activities involving confrontation with concrete realities in which we test what we think is true (like what cleans crud off of dishes).

In matters which are less experiential, or more ambiguous of interpretation, I realized many years ago that the beliefs of most people are governed by prejudice, emotion, and to a lesser degree by authority.

If a claim (a) fits their conceptions of the world, and (b) provides some emotional satisfaction … then by Jove, that’s the truth!

Critical thinking is very hard work, and often leads to painful conclusions. That’s a “tough sell.”

Impossibly Stupid May 26, 2021 11:44 AM

@SpaceLifeForm

Immutability?

Do you not know what content hashing is? I could refer you to a file on my site that is my original comment on this post (82d7…, to be specific). You can easily verify that the file hash matches the file name, and that a hash of the comment text matches the hash in the file. None of the content can change without the hashes changing as well.
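
To sketch the chain being described (my guess at the layout, since the actual file format on the site isn't given): the record is stored under its own SHA-256 digest, and the record in turn carries the digest of the comment text, so anyone holding the file can re-check both links.

```python
# Guessed illustration of the content-addressing scheme; the real file layout may differ.
import hashlib
import json

comment = "Example comment text, as originally written on the author's own blog."

record = json.dumps({
    "comment_sha256": hashlib.sha256(comment.encode()).hexdigest(),
    "posted_to": "schneier.com",
}).encode()

filename = hashlib.sha256(record).hexdigest()   # the "82d7..." style name

# Anyone can re-verify both links in the chain:
assert hashlib.sha256(record).hexdigest() == filename
assert json.loads(record)["comment_sha256"] == hashlib.sha256(comment.encode()).hexdigest()
print("filename matches record, record matches comment")
```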

Ok, here is an experiment you can try.

I’m not sure what you’re trying to get at; please be specific on what you think the problem is. When you say things like “take the text you see on a site”, you fundamentally don’t seem to understand the process of verifying the content of a comment. The comment here is not the original. Yes, there are potentially text encoding issues as well as changes that happen when Markdown is converted to HTML. None of that matters one whit, though, because I can refer to the comment I wrote on my site, which can be verified to be materially identical to the comment here (i.e., nobody changed either the syntax or semantics).

That is exactly what troll farms do.

Again, this makes me question if you fundamentally understand what a web of trust is. Why on earth would Bruce or anyone else place their trust in troll farms (or anyone that puts their trust in troll farms)? My point remains that communication metadata can be used to track and (pseudo-anonymously) identify users across sites. Big social media players already do this, but that power in the hands of individuals could actually be a good thing.

Clive Robinson May 26, 2021 12:27 PM

@ echo,

Quit the passive-aggressive s/he. I try to use "they" to be gender neutral, or both genders, as can be clearly seen when I was talking about roles. But gender neutrality is clumsy at the best of times, which is why most UK "writing style guides" say "do not do it". Also, the male gender is built into many accepted scientific and technical terms: it is "Mankind", not "Theykind", "Personkind", or "Womankind". The latter, interestingly, is very much gender-specific in usage, which is why you will only very rarely see it used.

As for,

I’ve outlined a system which optionally be rooted in a real id,

As I pointed out but I will reiterate,

1, A "real ID" is undesirable in society; it's not the way humans work in societal behaviour.

2, Those who push a "real ID" almost always have an ulterior motive, one that is generally not at all good for society.

3, The use of a "real ID" is not just "all eggs in one basket", it is also a single point of failure, so it lacks resilience. It also gives a significant ROI to attackers, who only have to find one vulnerability in one system, rather than different vulnerabilities in numerous disparate systems, which makes it a "Class Break", not a break by "multiple instances".

4, Then there are "liability issues": data, especially PII such as a "real ID", is not just "toxic" to the person whose ID it is, it also drags significant legal liability onto the person holding any "real IDs".

In fact the only real reason to hold a person's "real ID" is to exert some form of control over them. In many cases this is highly undesirable, as the Chinese Social Credit Score system is making clear.

Thus the reasons to design a system that does not have PII-holding capability far outweigh those for systems where it is "optional".

Also remember something “optional” can be “forced on” by third parties and US legislation has already gone down the compulsion route with PII holding.

So I disagree with your findings.

vas pup May 26, 2021 5:08 PM

Tag – AI
AI emotion-detection software tested on Uyghurs

https://www.bbc.com/news/technology-57101248

“A camera system that uses AI and facial recognition intended to reveal states of emotion has been tested on Uyghurs in Xinjiang, the BBC has been told.

A software engineer claimed to have installed such systems in police stations in the province.

The software engineer agreed to talk to the BBC’s Panorama program under condition of anonymity, because he fears for his safety. The company he worked for is also not being revealed.

But he showed Panorama five photographs of Uyghur detainees who he claimed had had the emotion recognition system tested on them.

“The Chinese government use Uyghurs as test subjects for various experiments just like rats are used in laboratories,” he said.

And he outlined his role in installing the cameras in police stations in the province: “We placed the emotion detection camera 3m from the subject. It is similar to a lie detector but far more advanced technology.”

He said officers used “restraint chairs” which are widely installed in police stations across China.

“Your wrists are locked in place by metal restraints, and [the] same applies to your ankles.”

He provided evidence of how the AI system is trained to detect and analyze even minute changes in facial expressions and skin pores.

According to his claims, the software ==>creates a pie chart, with the red segment representing a negative or anxious state of mind.

==>He claimed the software was intended for “pre-judgement without any credible evidence”.

Read the whole article if interested in details.

Cyber Hodza May 26, 2021 5:29 PM

@Clive
I think we both can agree that I am still not convinced.
But, focusing on what matters more here: despite how it looks at first sight, this trend is not as important as it seems.

The biggest danger is probably the fact that some have started talking about how to prevent it from happening by introducing yet more control and surveillance measures.

This fact alone should be a red flag that the whole episode could be, indeed, instigated by those who are now proposing additional strengthening of the Big Brother states.

PM Thompson May 26, 2021 10:24 PM

The big telecommunications companies paid millions of dollars to specialist “AstroTurf” companies to generate public comments. These companies then stole people’s names and email addresses from old files and from hacked data dumps and attached them to 8.5 million public comments and half a million letters to members of Congress

Sorry, I simply cannot believe that any company would do this. This sounds like fraud or worse on a massive scale. If a company like Comcast or Verizon or AT&T were to truly do this there would be severe repercussions.

Articles of Uncooperation May 26, 2021 11:25 PM

@PM Thompson,

#1, see patent troll corporate structure
#2, see various small NPOs
#3, legally dubious area
#4, no corpus delicti

#5, complainants

Thoughtfully, not for profit but for phun.

Now let’s game this scenario,

Entity A speaks to entity B about lobbying entity C. Entity A consults entity D about starting a small 2 entity entity for the purposes outlined in the conversation between entity A and entity B. Entity D creates entity E with entity A and money from entity B.

Entity E spends 10% of entity Bs investment on spider webs for entity C.

Entity C complains to entity F about cobbled webs being placed by entity E.

Entity F fines entity E to the tune of 20% of entity Es fitness.

Entity A and entity D divide the remaining carcass amongst themselves.

Articles of Uncooperation May 26, 2021 11:32 PM

@PM Thompson

Sorry, I almost forgot

Entity A and Entity D, before dividing up the carcass of entity E make sure to calculate Entity Fs share of tax revenue.

For phun AND profit.

Winter May 27, 2021 12:44 AM

@PM Thompson
"Sorry, I simply cannot believe that any company would do this."

I advise you to use the [sarcasm] tag. Unfortunately, there are all too often commenters who do not seem to get even the most obvious sarcasm.

vas pup May 27, 2021 3:28 PM

Privacy activists challenge Clearview AI in EU
https://www.dw.com/en/privacy-activists-challenge-clearview-ai-in-eu/a-57691756

“European privacy groups accuse the facial scan company of stockpiling biometric data on billions of people without their permission. The firm’s database contains images “scraped” from websites, including social media.

The campaigners allege that Clearview doesn’t have any legal basis for collecting and processing biometric data under the European Union’s General Data Protection Regulation, which covers facial images.

“Just because something is ‘online’ does not mean it is fair game to be appropriated by others in any which way they want to — neither morally nor legally,” said Alan Dahi, a data protection lawyer at Austrian privacy group Noyb.

The image stockpile was first reported last year by The New York Times, which detailed how the company was working with law enforcement, including the FBI and Department of Homeland Security in the US.”

Read the whole very informative article and three short good videos inside (#3 in particular).

HOWTO Win on a Rider May 27, 2021 3:45 PM

Thanks Clive,

Entity A and Entity B have a conversation about Entity C

Entity A and B enlist Entity D

So who’s being sarcastic?

I’ve been warned in the past considering my [sarcastic]short signed irrelevance[/sarcastic] I’d like to see someone else warned.

SpaceLifeForm May 28, 2021 12:28 AM

@ ImpossiblyStupid

Beer with me here. I’ll talk slowly.

Do you not know what content hashing is?

Yes. A hash is not a signature.

I could refer you to a file on my site that is my original comment on this post (82d7…, to be specific).

Yes, you could. But why should I chase ghosts on another site?

You can easily verify that the file hash matches the file name, and that a hash of the comment text matches the hash in the file.

Quite possibly true. Can you guarantee that? See MD5. Again, explain to me why I should be chasing a ghost.

None of the content can change without the hashes changing as well.

From your perspective. It’s your server. Again, explain to me why I should be chasing a ghost.

I’m not sure what you’re trying to get at; please be specific on what you think the problem is.

Wait for it…

When you say things like “take the text you see on a site”, you fundamentally don’t seem to understand the process of verifying the content of a comment.

Wait for it…

The comment here is not the original.

Ding! Ding! Ding!

Yes, there are potentially text encoding issues as well as changes that happen when Markdown is converted to HTML.

We have a winner!

None of that matters one whit, though, because I can refer to the comment I wrote on my site, which can be verified to be materially identical to the comment here (i.e., nobody changed either the syntax or semantics).

“Materially identical.” You may be able to verify that to yourself, but no one else can.
Your hash proves nothing to the outside public. It is not a signature.

Again, this makes me question if you fundamentally understand what a web of trust is.

Yes, I fully understand it.

Why on earth would Bruce or anyone else place their trust in troll farms (or anyone that puts their trust in troll farms)?

No one said that. Very poor strawman.

My point remains that communication metadata can be used to track and (pseudo-anonymously) identify users across sites.

This is true.

Big social media players already do this, but that power in the hands of individuals could actually be a good thing.

This is even more true.

But a SHA-256 hash is not a signature. It’s just not. Do not deceive yourself.
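
To make the point of this exchange concrete, here is a minimal Python sketch (not anything either commenter posted): a SHA-256 digest over a comment’s bytes lets anyone confirm the text is unchanged, but anyone at all can compute the identical digest, so it says nothing about who wrote the text.

```python
import hashlib

original = "My original comment text."   # what the author wrote
copied = "My original comment text."     # what anyone else can copy from the page

# Anyone can compute the same digest from the same bytes, so a matching
# digest demonstrates the text is intact, not who wrote it.
author_digest = hashlib.sha256(original.encode("utf-8")).hexdigest()
anyone_digest = hashlib.sha256(copied.encode("utf-8")).hexdigest()

assert author_digest == anyone_digest  # integrity, not identity
print(author_digest)
```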

Anonymous May 28, 2021 11:04 AM

@Clive

So I disagree with your findings.

I have no idea where my reply to this went, but I find you have a severe case of not seeing the wood for the trees both on the gendered use of language and the outline system I proposed for discussion.

I am aware of the gender studies and other issues, and it’s a complete pain-in-the-neck topic to discuss with non-experts, especially those with a viewpoint which is both conservative and contains biases. And please stop it with those insulting veiled digs. I’d rather not have to go back and dig up your earlier sexist anecdotes and admissions of cutting corners in the workplace. And like I said, any anecdote I might share would have me painted as the villain of the piece because, hey, Adam and Eve, double standards, blah blah.

As for the system I proposed, it directly addresses the critical issue you raised, so knock it off with the gaslighting. My proposed solution for discussion very clearly addresses the practical and intuitive aspects of verification in a way which matters. You yourself have discussed the administrative issues of this. My solution is more personal and less reliant on technology, and is what people in the real world do every day. You can gold-plate it as proportionate from that point on, such as adding deep vetting or electronic verification systems as appropriate, but you also need to treat it as a discussion piece to lift the lid on different aspects of the problem.

Not everything is technical; much of it is a social problem, as you yourself said. Now who popularised that on the internet some years ago and had all the (almost exclusively male) bigwigs talking? Who did she get that off? I’m old enough to know almost anything I think of has probably been thought of before. Nor am I always the one who can give it traction. I’ve also got over feeling narked about having my own words fed back to me, but stupid I ain’t.

That’s why, although I didn’t get one job at a university, I later ended up getting the head of the university’s computer department fired, because he was a clown act wasting millions each year, which I picked up from a single walkthrough of their site and one conversation. I know my stuff. There’s more to that story, but I’ll just say dealing with male ego in the workplace is a pain.

Also as I discussed with the lawyer the other day I am a solutions person not a single issue pressure group and job titles and status don’t impress me one little bit.

Impossibly Stupid May 28, 2021 11:33 AM

@SpaceLifeForm

A hash is not a signature.

I never said it was one. When you start off with a straw man like that, you completely undermine your position. Your comments are in many ways worse than the fake comments that can be generated by algorithms.

But why should I chase ghosts on another site?

Are you even following the topic at hand? The aim is to validate comments that people make. A challenge-response process that I use is one such way. The question you should be asking is: why should I trust that the comments I read here are real? It’s this site that is full of ghosts; my site contains my real comments. The fact that you must “chase” them down is an unfortunate artifact of how web sites evolved to be silos that attempt to control their visitors. Any site could allow distributed commenting, but that would involve acknowledging that they don’t actually own their users’ speech.

Can you guarantee that?

To the extent that a cryptographically secure hash can guarantee anything, yes; if that’s not enough for you, you’ll find that you have zero security anywhere on the Internet, so you shouldn’t even be here if your safety is important to you. Even MD5 functions as a usable hash for comments and other structured content, because forcing a collision is likely to result in something that contains a lot of nonsense that makes it clear it was the result of an attack.

From your perspective.

No, from a mathematical perspective. You keep showing that you don’t really understand how these things work.

Your hash proves nothing to the outside public. It is not a signature.

Wait, are you seriously saying that me standing behind the comments on my server is less believable than random comments on some random site with some random signature? You trot out signatures as a straw man, yet they seem to be just another thing you don’t understand. How are you going to verify that signature is really me? Explain it without having to “chase ghosts”, which you so bemoan.

No one said that.

You’re the one who said “That is exactly what troll farms do,” so if that’s anyone’s straw man to own, it’s another one of yours. If you actually understand what a web of trust is, please explain further how troll farms would use it to trick Bruce (or anyone else) into necessarily posting garbage comments. To those in the know, a web of trust allows exactly the opposite, helping you to identify the associates of bad commenters and preventing them from overrunning a site.

But a SHA-256 hash is not a signature. It’s just not. Do not deceive yourself.

Again, never said it was. That you believe otherwise is a self-deception. It’s sadly a popular way to view the world these days. Just the same, I encourage you to stop doing it.
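
The “challenge-response process” mentioned in the comment above is never spelled out anywhere in the thread, so the following Python sketch is only one plausible reading of it: the blog hands the commenter a random nonce, the commenter publishes a derived value at a fixed URL on a domain they control, and the blog fetches it to confirm domain control. The URL path and the nonce scheme are assumptions, not the commenter’s actual method.

```python
import hashlib
import secrets
import urllib.request

def issue_challenge() -> str:
    """Blog side: hand the commenter a fresh random nonce."""
    return secrets.token_hex(16)

def expected_response(nonce: str) -> str:
    """Both sides agree (by assumption) that the response is SHA-256 of the nonce."""
    return hashlib.sha256(nonce.encode("utf-8")).hexdigest()

def controls_domain(domain: str, nonce: str) -> bool:
    """Blog side: fetch the value the commenter published on their own site.

    The path '/.well-known/comment-challenge' is hypothetical.
    """
    url = f"https://{domain}/.well-known/comment-challenge"
    with urllib.request.urlopen(url, timeout=10) as resp:
        published = resp.read().decode("utf-8").strip()
    return published == expected_response(nonce)
```

Passing such a check ties a comment to control of a domain at one point in time; it does not, by itself, answer the identity questions argued over in the rest of the thread.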

Clive Robinson May 28, 2021 11:44 AM

@ echo,

I find you have a severe case of not seeing the wood for the trees both on the gendered use of language and the outline system I proposed for discussion.

I split the “gender swipe”, which is unrelated to the purpose of this blog, and your “outline system” apart instead of conflating them.

Of the former, I told you that the gender pronouns I use, and how I use them, are friendlier than English writing style guides require.

As for the latter, I thought I had made it very very clear that I thought including “real ID” in it was a very bad idea and I listed why.

My “So I disagree with your findings” was with respect to this.

I will note that this is not the first time you have picked on me over what you think is right or wrong with your touchy-feely view of the world. You don’t do it to others, and you do do it to me over a wide variety of things.

I’m sorry you feel that way, but to me, anyway, it looks like for some reason you are biased. Why, I don’t know, but based on previous occasions, to avoid further issues, I’m not going to respond any further to comments or partial comments you direct at me that are not directly related to the subject matter of this blog.

echo May 28, 2021 1:30 PM

@Clive

I will note that this is not the first time you have picked on me over what you think is right or wrong with your touchy-feely view of the world. You don’t do it to others, and you do do it to me over a wide variety of things.

Pay attention? Maybe I have a problem with your piledriving just because of certified handwave status, blah blah. There are things you miss, and you are not the be-all and end-all. And I’m “touchy-feely”? Are you going to be calling me the pejorative version of “woke” next? Tread carefully.

You can try and frame things as you like, and pull up the drawbridge and pull rank and all the other stuff, and hide behind a wall of maths and specifications and boys’ toys, but I know BS when I read it. Well, okay: if those are the rules you’re going to stick by, at least you’ll be shutting up with your long fireside chat yarns, or observations about politicians and their ilk, or your more sexist escapades showing you up to be a fist-bumping “Good Ol’ Boy”, because “Ooh, off topic, innit”? Or wasn’t it that kind of “off topic”? Let’s see how long you can keep that up before you burst. I’ll be waiting.

ImpossiblyStupid May 28, 2021 4:49 PM

@ ImpossiblyStupid, Clive, SpaceLifeForm

Wait, are you seriously saying that me standing behind the comments on my server is less believable than random comments on some random site with some random signature?

Yes. Signatures are not random. Your website proves nothing. Produce a PrivateKey/PublicKey pair, Announce your PublicKey. Announce the crypto algorithm. Sign the posts using your PrivateKey.

You trot out signatures as a straw man, yet they seem to be just another thing you don’t understand.

You may want to look in a mirror.

How are you going to verify that signature is really me?

Ding! Ding! Ding! We have a winner!

Explain it without having to “chase ghosts”, which you so bemoan.

Spot the problem?

Impossibly Stupid May 30, 2021 5:30 PM

@Person pretending to be me

Yes. Signatures are not random. Your website proves nothing. Produce a PrivateKey/PublicKey pair, Announce your PublicKey. Announce the crypto algorithm. Sign the posts using your PrivateKey.

I’m at the point that I’m ready to write you off as a troll. Perhaps you’re used to being the smartest person in the room, but this is a security blog and people here will call you out on your BS. When you hand wave things like “Announce your PublicKey”, you show no understanding of how that fundamentally does not establish the trust you assert it does. You have yet to answer the epistemological question of identity. You have yet to outline a process that doesn’t involve “chasing ghosts”.

> How are you going to verify that signature is really me?

Ding! Ding! Ding! We have a winner!

Since you don’t answer that question, you don’t win anything. If you are indeed arguing in good faith and not a troll, please demonstrate it by creating a pseudo-anonymous identity by the process you give at the start of your post (or any other one of your choosing), and then use it for all your future posts so that people here can verify it’s you. If it is demonstrably better than my process, I’ll adopt it and then you can do more than pretend you’re a winner.

Spot the problem?

No. If you have a point to make, actually spell it out. That Bruce’s blog allows fake comment attribution only speaks to the topic at hand. If you think your surface fake of me is a problem, then all you do is make everyone question the legitimacy of every post here. Since my actual posts are stored on my own server, though, your fake would fail any attempt at validation.

It is not at all obvious to me that your precious signature solution is better for this blog. As the old saying goes, put up or shut up. Because right now, all your straw men have done nothing to show that you can pretend to be me in any material way.
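
For readers who want to see what the “generate a key pair, announce the public key, sign your posts” proposal argued over above would actually look like, here is a minimal sketch using Ed25519 from the third-party cryptography package. How the public key gets announced, and why anyone should believe it belongs to a particular commenter, is exactly the unresolved question in this exchange; the code only shows the mechanics.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# One-time setup: generate a key pair and publish the public key somewhere
# readers already trust (where, exactly, is the open question in this thread).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

post = b"A comment I am willing to stand behind."
signature = private_key.sign(post)  # attach this (e.g. hex-encoded) to the post

# Any reader holding the announced public key can check the signature.
try:
    public_key.verify(signature, post)
    print("signature matches the announced key")
except InvalidSignature:
    print("signature does not match")
```

A verified signature only proves the post came from whoever holds the private key; tying that key to a real person, or even to a stable pseudonym, is the part that still needs something like the web of trust discussed earlier in the thread.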

Weather May 30, 2021 8:19 PM

@Impossibly Stupid
Most of my posts have technical details that can be checked. If you are claiming I should release some information to you, then I don’t want to; it’s not their tactics, it’s whether I’m having a bad day.
Pfm, like the exe extension: did anyone follow up on that? I know it started with ‘P’; a quick search in regedit should find it.
Like bullies at school, something is wrong external, but they posn.
