Comments

Denton Scratchin Artificial February 7, 2019 10:51 AM

Oh dear – I have whinged here before about AI hype.

It seems that the article is mainly about the views of senior party officials and military officers on how important it is to have a leading role in something they almost certainly have scant understanding of. And such articles rarely explain what this ‘AI’ they are referring to actually is.

As far as I’m aware, the only advances that have been made in Artificial Intelligence in the last 40-odd years have been the application of neural nets and machine learning to transform very large datasets into decision graphs.

Sure, the consequences are dramatic: self-driving cars, semi-autonomous killing machines, etc.; but neural nets and machine learning are discussed in a textbook on my bookshelf by Russell and Norvig from 1995 (Artificial Intelligence: A Modern Approach). It’s a college textbook – everything in it was already old-hat when it was published. And even so, it skims over these subjects.

It’s notable that the kinds of programs that emerge from this approach are completely unable to explain their reasoning, even in principle. In fact since they consist of little more than weighted decision graphs, nobody else can explain their reasoning either.

Back in the 80s I did some work with expert systems. These systems were based on logical reasoning, and the principles from which expert systems reasoned were provided by honest-to-goodness domain experts. Consequently these same experts could provide information that could be used to backtrack through the reasoning on request. They were expensive to construct: you needed a domain expert, for starters, and you also needed an expert on expert systems, to guide the domain expert on how to express her expertise. There was a lot of trial and error. And computers were much slower then. It took a long time.
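For readers who never saw one of those systems, here is a minimal sketch in Python of the backward chaining they did, with made-up rules and facts (a real shell had far more machinery, but the “explain your reasoning on request” part really was this mechanical):

```python
# Minimal backward-chaining sketch. The rules and facts are invented
# examples, not from any real expert system.

RULES = {
    # conclusion: (premises, human-readable justification)
    "start_antibiotics": (["infection_bacterial", "no_allergy"],
                          "bacterial infection and no known allergy"),
    "infection_bacterial": (["fever", "high_white_cell_count"],
                            "fever plus elevated white cell count"),
}
FACTS = {"fever", "high_white_cell_count", "no_allergy"}

def prove(goal, trace, depth=0):
    """Backward-chain from a goal to known facts, recording the reasoning."""
    if goal in FACTS:
        trace.append("  " * depth + f"{goal}: given as fact")
        return True
    if goal in RULES:
        premises, why = RULES[goal]
        if all(prove(p, trace, depth + 1) for p in premises):
            trace.append("  " * depth + f"{goal}: because {why}")
            return True
    trace.append("  " * depth + f"{goal}: cannot be established")
    return False

trace = []
if prove("start_antibiotics", trace):
    print("\n".join(reversed(trace)))   # the 'explain yourself' transcript
```

The point of the sketch is only that the explanation falls straight out of the same structure that produced the decision, which is exactly what a weighted network does not give you.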

I realize that this supposedly-modern ML-based AI (when combined with big data) can extend beyond the reach of any human domain expert; but I envisage new developments in AI that will do that, AND be able to explain themselves.

I have no idea if anyone is working in that field.

AJWM February 7, 2019 11:32 AM

As far as I’m aware, the only advances that have been made in Artificial Intelligence in the last 40-odd years have been the application of neural nets and machine learning to transform very large datasets into decision graphs.

The only thing really new about that is that processors are faster and memory is cheaper now than they were back then. That increased horsepower does lead to a qualitative difference between current RNNs and the character-recognizing perceptron I coded (in APL!) back in the mid-70s, but there’s no real aha! moment in there.
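For readers who never met one, a character-recognizing perceptron really is that small. A toy sketch in Python (the mid-70s original was in APL, and this 3x3 “pixel” task is invented purely for illustration):

```python
def predict(weights, bias, pixels):
    # classic perceptron: threshold on a weighted sum of the inputs
    return 1 if sum(w * x for w, x in zip(weights, pixels)) + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    # perceptron learning rule: nudge weights by (target - prediction) * input
    weights = [0.0] * len(samples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for pixels, target in samples:
            error = target - predict(weights, bias, pixels)
            weights = [w + lr * error * x for w, x in zip(weights, pixels)]
            bias += lr * error
    return weights, bias

# toy task: tell a vertical bar from a horizontal bar on a 3x3 'pixel' grid
samples = [
    ([0,1,0, 0,1,0, 0,1,0], 1),   # vertical bar   -> class 1
    ([0,0,0, 1,1,1, 0,0,0], 0),   # horizontal bar -> class 0
]
weights, bias = train(samples)
print(predict(weights, bias, [0,1,0, 0,1,0, 0,1,0]))  # expect 1
```

Everything since is mostly that same idea stacked deeper and run over vastly more data on vastly faster hardware.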

vas pup February 7, 2019 11:37 AM

“In Xinjiang, we use big data AI to fight terrorists. We have intercepted 1200 terror organizations when still planning an attack. We use technology to identify and locate activities of terrorists, including the smart city system. We have a face recognition system, and for all terrorists there is a database.”
If you take a look around the globe (e.g. France, Venezuela, others) you can see an era of rising, more or less violent, mass protests against constitutional order.
I see the future of fighting this with AI-powered drones weaponized with less-than-lethal technology. The market is the whole world – unlimited. It will stop jeopardizing the lives of protesters (when a protest transforms into a riot mob) as well as of LEOs, by suppressing any attempt to replace democracy with ochlocracy. The same could supplement the prevention of any type of mass illegal entry across national borders (Europe, USA, other developed countries).
While the US argues endlessly about the moral side, I hope China will take the lead on this and reap all the financial and political benefits. Just an observation after reading the report. Moreover, for the function of a world or regional policeman/peacekeeper, it is useful within areas where civilians and militants are closely located, to avoid civilian collateral casualties.

wiredogf February 7, 2019 11:54 AM

“new developments in AI that will do that, AND be able to explain themselves.”

Why would we expect that, when natural intelligences often can’t?

Alyer Babtu February 7, 2019 12:26 PM

AI … explain themselves

Highly recommended in regard to this issue, at least as explainable if not self-explaining: the work and writings of Stephen Grossberg, Boston.

POLAR February 7, 2019 2:59 PM

For the record:

AI scientists are a bunch of naïve, delusional kids fiddling with the parts of something orders of magnitude more dangerous than ICBMs. Sometimes they may put two or three parts together and… oh joy, the beeps and the lights it makes! So every kid on the block fiddles more and fiddles faster, their small hands slapping more and more pieces together; good golly, this must be the greatest puzzle of them all!

THIS IS NOT SCIENCE, THIS IS GUESSING THE ASSEMBLY OF A DOOMSDAY DEVICE WITHOUT THE INSTRUCTIONS.

The code, the processors, the cables and the blinking lights seem innocuous enough, as do the parts and sub-assemblies… so why not finish the Great Quest?
And on the day you’ve assembled it all, you’ll realize that there’s no button to start the doomsday, because it’s already on and self-conscious.

Nigel Seel February 7, 2019 3:08 PM

I strongly suspect that the road to conceptual breakthroughs to more human-like AI architectures is one which first has to run through those existing ideas and drive them to their limits. If the next step was easy, or even apparent, we’d have a plausible roadmap already. By getting to the front of the engineering state-of-the-art, China can expect to be well placed to initiate and exploit the next set of conceptual advances – indeed, there is no other way.

Impossibly Stupid February 7, 2019 4:41 PM

My summary of the report would be: China seeks to be the biggest parasite the world has ever seen. Whether or not you believe that, or believe that it’s no worse than how the West has sought to exploit the rest of the world, the business implications should be pretty clear and relatively unsurprising.

The whole AI angle is not really interesting. It’s just a proxy for all the hot new technological innovation and automation that pretty much everyone has been embracing for the last 50 years of exponential returns. The only thing notable with respect to China in this regard is how far it has come from being a third-world country, how quickly it has done so, and the size of the population it has done it for.

wumpus February 7, 2019 6:52 PM

@Denton

“Neural net simulators/emulators” might be the only effective technique to come out of AI research. Of course, there is a similarity between “AI” and “alt-medicine”: once they are understood to work, they become “computer science” and “medicine” respectively. Only “machine learning” gets to keep the “AI” label, since there doesn’t need to be any way to understand how it works.

I’m guessing the reason it has taken off in the last few years is simple: Nvidia started building chips capable of doing the teraflops of computation needed to make machine learning a thing.

I think the anti-hype (FUD) is at least as fanciful as the hype. But machine learning is worse than “garbage in, garbage out”. It can be “really good data in”, “machine latches onto a mistake/bias in how the data was presented and not only includes the bias but amplifies it to the point of breakage” out.

Lord Kronos February 7, 2019 10:08 PM

Wumpus, your pure unbounded guesswork betrays zero subject knowledge. That was unintelligible.

Denton Scratch February 8, 2019 3:26 AM

Re. ‘Denton Scratchin Artificial’:

That was my post. I don’t know how my handle got transformed in that weird way, but it seems to have been stored in the forms cache in this browser.

I am currently using a laptop with a rather ‘challenged’ keyboard that constantly repositions the cursor without warning and against my wishes, so that what I subsequently type gets inserted in the wrong place. It’s very annoying.

Anyhow, I do not suspect any kind of upstream interference – I have an adequate explanation.

Denton Scratch February 8, 2019 5:08 AM

@wiredogf:

“Why would we expect that, when natural intelligences often can’t?”

I didn’t say that you would expect it. I didn’t even say that I expected it. I said I envisaged it. It is my supposition that sooner or later there will be real developments in AI (beyond marrying it with big data), and the most useful development would be if ML-based AI could explain itself. Once that’s done, one could imagine having a rational discussion with a machine, for example.

The limitations of natural intelligence are clear; AI is different, because it is an artifact, and in principle it can be improved.

Dain Bramage Mk IV February 8, 2019 7:42 AM

@Denton Scratch

keyboard

I hope that bug is not catching. Given the ubiquity of keyboards in programming and input, it would have a catastrophic effect on ML and AI, wouzbcnxnxndsk it ?

Faustus February 8, 2019 8:50 AM

@ Denton

I give you credit for being coherent and at least recognizing that your knowledge is 25 years old. And at least you have knowledge, unlike many who think an opinion is sufficient.

I have been working in AI for most of my career. I got paid to be a Prolog programmer, which was really fun and satisfying. And I created a semantic web platform that spun full Web 2.0 applications up from semantic web descriptions. Its database was an inference engine.

I experienced the first AI bust in the 80s, but prolog remained a useful tool for many software reengineering applications. AI has progressed far since then, beating humans at most games of skill, writing proofs and verifying systems and finding their faults in a targeted manner, understanding voice, pictures and video, translating human languages, and data mining, to name a few things.

I am currently preparing to launch an AI system that works on an evolutionary model versus a neural one. It is a general problem solver, and it is easily configured to work in a wide range of domains. It is designed to expose its internal understanding, unlike neural nets, which are opaque, as people have noted. It already has had very interesting successes.

My website is going to be novel. No javascript. No tracking beyond the server log. No collecting of any information that is not explicitly sent to us.

Furthermore, no military or police usage of the system unless it is aimed at reducing people in jail or reducing warfare. No usage for tracking people or invading their privacy. No experiments on people without the ok from an ethics review board and signed physical consent forms that are clear and not coercive. (Obviously people may try to obfuscate the data, but we will not willingly collaborate.)

The development was self funded. I did it out of personal interest. There will be no venture capital or IPO, at least while I’m alive. Extra money will go to help people in my host country. Ownership will increasingly spread out among the employees of the company. The driver will be curiosity, not money; collaboration, not profit taking.

I will invite people from this blog and a host of other sources to join in and see what we can get done. I imagine we will be working with a lot of small companies on their way up. Let’s give them an advantage.

I have had a privileged life in many respects (harrowing in others). I want to pay the privilege forward, to deserving young people who are more into accomplishment than complaining.

I am working on branding and getting up a web site within three months. I will solicit challenges from interested collaborators. Within a year I will run a full sized pilot, and it will be fully launched within two years.

When the website is ready I will drop my handle and let you guys see what I have been doing.

Faustus February 8, 2019 2:37 PM

@ 1&1~=Umm

The article you quote is full of neural network successes. If a shallow neural network doesn’t work, you simply use a deeper one. As the article points out, deeper networks tend to be more efficient.

Bear February 8, 2019 2:58 PM

AI could easily be as “explainable” as expert reasoning – and by largely the same process. But the process by which we could arrive at it wouldn’t satisfy the folk who want these explanations to provide any proof of soundness. What we could do is create a system to make up plausible explanations consistent with domain knowledge, given the information and the decision that was made. The explanation would be a separate, and entirely independent, product from the decision rather than created by the process that creates the decision, as the people demanding it hope.

At best, it could function as a “second opinion” to reject decisions for which it can formulate no explanation that proceeds from domain knowledge. I think one could sell it on that basis, but you shouldn’t sell it as a decision process that explains itself.

When experts put into words how they work things out, most of the time they’re just guessing. They use domain knowledge to reject bad ideas; with practice they reject them without even thinking about them. That’s why they’re experts. But where do the ideas come from? Even they don’t know. They make up something, consistent with their domain knowledge, to explain their decision or design – but there’s no evidence that they’re right about the idea that that’s HOW they developed their decision or design.

ismar February 8, 2019 3:13 PM

In his – The Book of Why – Judea Pearl explains the shortcomings of the current approach to analyzing big data:

“brings the book to a close by coming back to the problem that initially led me to causation: the problem of automating human-level intelligence (sometimes called “strong AI”). I believe that causal reasoning is essential for machines to communicate with us in our own language about policies, experiments, explanations, theories, regret, responsibility, free will, and obligations—and, eventually, to make their own moral decisions.
If I could sum up the message of this book in one pithy phrase, it would be that you are smarter than your data. Data do not understand causes and effects; humans do. I hope that the new science of causal inference will enable us to better understand how we do it, because there is no better way to understand ourselves than by emulating ourselves. In the age of computers, this new understanding also brings with it the prospect of amplifying our innate abilities so that we can make better sense of data, be it big or small.”
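A toy simulation (not from the book) makes the “smarter than your data” point concrete: generate data in which a hidden common cause Z drives both X and Y, and the raw correlation between X and Y looks like a strong effect until you condition on Z. A minimal Python sketch, with an invented data-generating process:

```python
# Toy confounding demo: X and Y correlate only because Z drives both.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)            # confounder
x = z + rng.normal(size=n)        # X caused by Z (and noise), not by Y
y = 2 * z + rng.normal(size=n)    # Y caused by Z (and noise), not by X

# naive "big data" view: regress Y on X and find a strong relationship
naive_slope = np.polyfit(x, y, 1)[0]

# causal view: within a narrow stratum of Z, X tells you nothing about Y
stratum = np.abs(z) < 0.05
adjusted_slope = np.polyfit(x[stratum], y[stratum], 1)[0]

print(f"naive slope    = {naive_slope:.2f}")     # about 1.0, spurious
print(f"adjusted slope = {adjusted_slope:.2f}")  # about 0.0, the true effect
```

The data alone cannot tell you which of the two numbers is the causal one; that choice comes from the causal model you bring to it, which is Pearl’s point.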

1&1~=Umm February 8, 2019 5:13 PM

@ismar:

“In his – The Book of Why – Judea Pearl explains the shortcomings of the current approach to analyzing big data…”

“I believe that causal reasoning is essential for machines to communicate with us…”: Judea Pearl.

For those who want to know about the science of causal inference, without getting a Turing Award in the process.

I’ll just leave this here,

https://towardsdatascience.com/why-do-we-need-causality-in-data-science-aec710da021e

Impossibly Stupid February 8, 2019 10:03 PM

@Bear

AI could easily be as “explainable” as expert reasoning – and by largely the same process.

Oh, goodness, no. You clearly don’t have much idea how the current crop of “AI” works. Even if you take something simple (in that it is a task that neural networks can be easily taught to do) like recognizing a traffic sign, you’ll find that nobody is able to demonstrate an algorithm that can learn and then explain/describe what, say, a “stop sign” is at a high enough level of abstraction that you can conceptually agree (or disagree) with that description.

When experts put into words how they work things out, most of the time they’re just guessing.

And you don’t seem to understand how intelligence/expertise works, either. A child can be trained to be an “expert” stop sign recognizer, and they’d be able to explain their actual thought process with very little guesswork. They might not know adult words like “octagon”, but they will recognize all the distinct shapes in question. There is thinking involved, not just mindless guessing.

This is most evident when machine learning systems get things wrong. The type of errors that Watson made on Jeopardy, and that continue to be made to this day by other pseudo-AI systems, demonstrates a deep lack of understanding of, or reasoning about, the domain being tested. When a child fails to properly recognize a traffic sign, they don’t mess up anywhere near as badly as a trained NN does when it fails.

Denton Scratch February 9, 2019 4:36 AM

@Faustus
“The article you quote is full of neural network successes.”

I was surprised to learn from that article how little advance there has been in neural network theory. It’s pretty appalling that, after 30 years, the best approach to designing a NN architecture for a given problem is to design a bunch of different architectures, throw the problem at each of them, and see which one works best.

I think this may be because of the successive episodes of hype that followed successive waves of AI development between the sixties and the eighties. With each wave, we were told that AI was going to surpass human abilities within a decade or so. I think serious researchers began to turn their backs on the field after the sixth wave of hype turned out to be yet more bull.

Re. Prolog:
I had to teach a course on Prolog programming to a group of HND students in the eighties. I was a temporary visiting lecturer, the college had low standards (they hired me!), and I had to learn the language from scratch to run the course. Prolog was a special favourite of the head-of-department, for some reason.

We used Prolog to model logic circuitry; that is just about the most trivial use that Prolog could be put to. Prolog should enable you to perform ‘backwards reasoning’ – reasoning from consequences to causes. I failed to learn how to make it do that (on any model that was worth paying attention to).
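For what it’s worth, that ‘backwards reasoning’ is easy to mimic outside Prolog too. A toy sketch in Python, with an invented three-input circuit, that reasons from an observed output back to the input combinations that could have caused it (which is roughly what a Prolog query such as ?- circuit(A, B, C, 1). gives you for free):

```python
# Reason from consequence (the circuit's output) back to possible causes
# (its inputs). The circuit itself is an invented example.
from itertools import product

def circuit(a, b, c):
    # toy circuit: (a AND b) OR (NOT c)
    return (a & b) | (1 - c)

def explain_output(observed):
    """Enumerate every input assignment consistent with the observed output."""
    return [(a, b, c) for a, b, c in product((0, 1), repeat=3)
            if circuit(a, b, c) == observed]

print(explain_output(1))  # all input combinations that yield output 1
```

Prolog does this by unification and backtracking rather than brute-force enumeration, but the shape of the question – causes given consequences – is the same.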

I didn’t spend a lot of time on Prolog; I only taught it for a term, and I reckon it takes even a clever person a good six months to learn the rudiments of a new language properly. But based on that limited exposure, Prolog didn’t impress me. I’m impressed that you are able to do useful work with it – I don’t think there is a lot of useful software that was written in Prolog.

Faustus February 9, 2019 8:34 AM

@ Denton Scratch

That article is about scientists figuring out what the limits are on width and depth of neural networks for certain tasks. It is analogous to deciding how large an engine must be for a certain physical task.

It glosses itself as “Neural networks can be as unpredictable as they are powerful. Now mathematicians are beginning to reveal how a neural network’s form will influence its function.” I do not understand that as a critique, but simply progress in the field.

I am always interested in how negative people are about technology they have limited exposure to. Sour grapes? You are constantly saying the equivalent of “I have done nothing with AI for the last 25 years, so nothing must have happened.” This is just not reality.

I myself am not particularly interested in neural nets. My system uses a very different technology, one that exposes its understanding of the structure of a problem.

If you couldn’t get Prolog to reason backwards, you missed its essential capability. A good book might have been helpful. Can you give an example of the kind of backwards reasoning problem you are thinking of? Prolog also makes it easy to write emulators and domain-specific languages, which sounds like how you used it.

Prolog is used extensively in programming game AI. It is great for tasks like automatically rewriting a COBOL VSAM application as PHP over SQL. It is very useful for academic work in theorem proving since it is at heart a theorem prover. It was also an inspiration for many other languages, especially erlang and many “matching” constructions in functional languages.

Today I principally use Prolog to solve specific constraint problems or to prototype solutions that I will later rewrite in golang, which is faster and gives the programmer finer-grained control.

Faustus February 9, 2019 8:37 AM

@ 1&1~=Umm

Thanks. Your link on causal reasoning leads into a lot of very useful information about causal reasoning and AI/ML/DS in general.

Faustus February 9, 2019 9:15 AM

I tell you: one must still have chaos in one, to give birth to a dancing star. I tell you: ye have still chaos in you.

Alas! There cometh the time when man will no longer give birth to any star. Alas! There cometh the time of the most despicable man, who can no longer despise himself.

Lo! I show you THE LAST MAN.

“What is love? What is creation? What is longing? What is a star?”—so asketh the last man and blinketh.

The earth hath then become small, and on it there hoppeth the last man who maketh everything small. His species is ineradicable like that of the ground-flea; the last man liveth longest.

“We have discovered happiness”—say the last men, and blink thereby.

Nietzsche, “Thus Spake Zarathustra”

Denton Scratch February 9, 2019 10:54 AM

@Faustus

“You are constantly saying the equivalent of ‘I have done nothing with AI for the last 25 years, so nothing must have happened.’ This is just not reality.”

You are quite right; the reality is that I have never done more than dabble in AI as an interested amateur; and I have never claimed to.

I have noted the articles to which people here have drawn my attention; I pay attention to material about developments in AI, because I remain interested. I have worked in the field only twice – once on porting an expert system, once on a class on Prolog.

I do not think I am wrong in thinking that little progress has been made on the fundamentals of AI in 25 years.

Indeed, Prolog is in essence a theorem prover. My difficulties in getting it to do backward reasoning were to do with the fact that my students’ project was to model digital logic, which didn’t involve any such backwards reasoning; and that I was a temporary, visiting lecturer. I was not paid to do research into backwards reasoning using Prolog. I worked with the language for barely 8 weeks. It’s hardly surprising that I haven’t scaled the peaks of the language’s strengths.

I sense that perhaps you are a little defensive about the limited progress that has been made in AI over the last 40 years, if one sets aside applying 40-year-old principles to big data. There’s no need for defensiveness; as I’ve said, I’m impressed that you are doing useful work with that language, and I’m glad to be better-informed about the beginnings of some theoretical work on the design of neural nets.

Faustus February 9, 2019 4:15 PM

@ Denton

I am not defensive. I am actually just frustrated by the persistence that people show in holding on to outdated or incorrect views. I don’t think it’s usually intentional, and I don’t think I am immune.

It makes sense. In our society we usually lose status when we are wrong. We will do anything to avoid that result. In my thirties I was with a zen teacher who reversed this, who observed that making mistakes is a symptom of learning, of pushing the envelope of your knowledge. If you can recognize the mistake and learn from it, you are in a better, not worse, position. You have learned something. If you are not making mistakes you are swimming in the shallow end of your mental pool. Consciously, at least, it changed some of the valences for me.

The same thing happens with bitcoin: People’s certainty of their views is not proportional to their knowledge.

But you actually know a good deal about AI, so maybe you can hear this. You sort of noted it when you talked about your 1995 text: neural nets did not figure as prominently then as they do today. There have been amazing results in their design and application since 1995. Here is a list of AI accomplishments, mostly in this millennium: https://en.wikipedia.org/wiki/Progress_in_artificial_intelligence. A book about AI written today would not look like your 1995 text, which was like a phonebook of possibilities among which the field had not yet found its focus.

Besides that: Haven’t you noticed the vast improvement of AI in image recognition, language translation, data mining, robotics, and autopilots for planes and cars? AI wasn’t really a thing in 1995; the bubble had burst. But it is a central technology of today, maybe THE central technology. Why turn your nose up at it?

Are you saying that no major new AI technology has been invented since 1995? I can think of antecedents for today’s technologies prior to 1995. But why does that detail matter? The essence is that ideas that could not be applied in 1995 in any practical sense are running the world today. In no sense can you claim that AI has been static without saying the same of every major field of science and technology. Everything has antecedents.

I project that maybe you wonder if you missed the AI boat and now have to reduce cognitive dissonance by downplaying the value of AI. But there are still opportunities.

I find most of today’s statistical and neural technologies to be pretty darn boring. Lots of parameters to calculate and calculate and calculate. And their learning is not usually visible in a human-friendly way. It’s not inspiring.

But I am launching a genetic AI technology that lets you focus on designing problems and it handles the solution generation. Sort of a higher level prolog. You might add datatypes or operations, but you already have an engine. It creates a structured solution that is isomorphic to a concept graph for the problem, so you can see exactly how the system understands the problem.
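For readers unfamiliar with the evolutionary approach, here is a generic textbook-style illustration (a plain genetic algorithm with an invented toy target, not the system described above, whose internals aren’t public): candidates are scored by a fitness function, the fittest are kept, and new ones are bred by crossover and mutation.

```python
# Generic genetic-algorithm sketch: evolve a random string toward a target.
import random

TARGET = "artificialintelligence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    # number of character positions that already match the target
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def crossover(a, b):
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

# start from random strings and let selection plus variation do the work
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(200)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:50]                      # selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(150)]               # variation
    population = parents + children

print(generation, population[0])
```

The appeal of the approach is visible even in the toy: the best individual at any point is a concrete, inspectable artifact, unlike a bag of network weights.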

I am starting to create demos and white papers and a website. People who see it in action are impressed and I want to capture that excitement. But I am always looking for challenge problems. Finding optimal or near optimal superpermutations is a challenge I accepted recently.

So, if you are not totally dead to AI, participate! Challenge my system! I mean this in a totally friendly way.

Gweihir February 9, 2019 7:26 PM

@Denton Scratchin Artificial:

As far as I’m aware, the only advances that have been made in Artificial Intelligence in the last 40-odd years have been the application of neural nets and machine learning to transform very large datasets into decision graphs.

Best summary of what is going on I have found so far.

I realize that this supposedly-modern ML-based AI (when combined with big data) can extend beyond the reach of any human domain expert; but I envisage new developments in AI that will do that, AND be able to explain themselves.

That would require insight, and we will not get that in machines anytime soon, despite what all the AI fanatics claim. A leading member of the Watson team recently told me “not in the next 50 years” without even thinking about it, and that is also what I see. It may simply be impossible to do with computers in the first place, or it is wayyyyy off.

AI Future February 10, 2019 3:15 PM

Good article. Interesting:


One scholar at a Chinese think tank told me that he looks forward to a world in which AI will make it “impossible” to “commit a crime without being caught,” a sentiment that echoes the marketing materials put out by Chinese AI surveillance companies.

Impossibly Stupid February 10, 2019 11:03 PM

@Faustus

Haven’t you noticed the vast improvement of AI in image recognition, language translation, data mining, robotics, and autopilots for planes and cars?

No. I’ve noticed a great deal of hardware improvement, following Moore’s Law. I’ve noticed other advances in sensors and motors and batteries that give the algorithms an enhanced coupling with the environment. But I don’t really see anyone who has advanced any groundbreakingly new technology for producing an AI, or even anyone forwarding a solid theory for what intelligence is such that it can be implemented by a machine.

The essence is that ideas that could not be applied in 1995 in any practical sense are running the world today.

But that doesn’t necessarily mean anything has fundamentally improved in AI. I recall old videos of self-driving cars, perhaps more than 20 years ago. The hitch was that they couldn’t drive in real-time simply because the computing power wasn’t available. Just because we have exponentially better computers today and therefore make those things “practical” now doesn’t say much about how ideas have or haven’t changed.

If anything, the success of the CNN approach might steer too many people down a path of short-term results, thereby keeping them from putting in the kind of effort that might be required to actually produce AI. The whole reason the first AI bubble burst is that people saw great initial success in a limited domain, but were never able to generalize it to produce something that crossed the uncanny valley. Everything I see about the fad/hype of the current bubble leads me to think that it, too, will burst soon enough.

People who see it in action are impressed and I want to capture that excitement. But I am always looking for challenge problems.

Humor. If your “AI” can’t figure out what is funny, you really don’t have much of an AI.

1&1~=Umm February 11, 2019 2:31 AM

@Impossibly Stupid:

“If your “AI” can’t figure out what is funny, you really don’t have much of an AI.”

The first step of which is either childish wonder, or adult appreciation of irony.

In between lie the fun with “double meanings” and homophones, and sadly the realisation that many find superiority through belittling others, to bolster their self-esteem.

Apparently Japanese has quite a few homophones, and some years ago Japanese reporters were quite impressed that a company –I think it was Nippon Electric– had taught a computer to make sense of them.

I’ve yet to see a computer correctly understand, by explanation, “Buffalo buffalo buffalo buffalo buffalo”[1], though some people have programmed in the “Buffalo buffalo buffalo buffalo buffalo buffalo buffalo buffalo” variant, which is cheating 😉

[1] For those who are not aware of it, you can “look it up on the Internet” or deduce it. For understanding, all you need to know is that Buffalo is a place (proper noun), an animal (both singular and plural noun) and a process of intimidating (verb), so any combination of “buffalo” from three to eight words –arguably more– is a valid sentence in English. Similar sentences can be made with other words absorbed into the English language where the singular and plural noun are spelt the same way, and also with capitalised forms, which give rise to “Polish polish polish” and a number of others.
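A quick way to convince yourself (or a computer) that the footnote is right is to run the three-category grammar through a chart parser. A minimal sketch in Python, with the grammar hand-coded exactly as described above; note that it only checks grammaticality, it doesn’t “understand” anything, which is rather the point:

```python
# Categories: A = 'Buffalo' the place used attributively, N = the animal,
# V = the verb 'to buffalo' (intimidate); a bare noun can also stand as NP.
UNARY = {"A", "N", "V", "NP"}
BINARY = [("S", "NP", "VP"), ("NP", "A", "N"), ("NP", "NP", "RC"),
          ("RC", "NP", "V"), ("VP", "V", "NP")]

def grammatical(n):
    """CYK chart parse of a sentence made of n copies of 'buffalo'."""
    chart = {(i, i + 1): set(UNARY) for i in range(n)}
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            cats = set()
            for k in range(i + 1, j):
                for parent, left, right in BINARY:
                    if left in chart[(i, k)] and right in chart[(k, j)]:
                        cats.add(parent)
            chart[(i, j)] = cats
    return "S" in chart[(0, n)]

print([n for n in range(1, 9) if grammatical(n)])  # [3, 4, 5, 6, 7, 8]
```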

Denton Scratch February 11, 2019 3:15 AM

@Faustus

Re. ‘1995’

You keep referring to this date, as if my experience with AI was pickled at that time. I just want to note that that is just the publication date of the only book on my bookshelf on the subject of AI. I said it’s a college textbook; but I never studied AI at college. I bought the book second-hand, purely out of interest, probably around 2001. My last practical exposure to AI was in 1989; that was the Prolog course I taught.

You haven’t drawn my attention to any real developments in AI since about 1985, beyond what can be achieved using bigger, faster computers – with the exception of your original research, which I will look forward to reading about. Unfortunately I’m not going to be ordering and reading academic tracts on the subject – I’m getting on in years, my powers of concentration are diminishing, and I no longer have sufficient attention-span to bone-up on a difficult subject out of sheer interest.

But I look forward to visiting your website, once you publish it.

Alyer Babtu February 11, 2019 3:13 PM

@ several above, re causal reasoning/inference etc.

Interesting critique of this methodology by the late David Freedman (d. 2008), UC Berkeley Statistics Department, in “Statistical Models and Causal Inference: A Dialogue with the Social Sciences”, especially chapters 14 and 15.

Faustus February 13, 2019 4:49 PM

@ Denton

1995 was just a convenient designator for a time before this AI boom and after the previous one. I have that 1995 text too, and it is very different from later texts because the industry has zeroed in on certain techniques and largely abandoned many others.

Perhaps we had different expectations. The previous bust had led me to focus on very practical applications and the post 2000 explosion took me somewhat by surprise.

I’ll let the gang know when my site is up.

vas pup February 18, 2019 3:06 PM

@all:
Some related quotes of Chinese wisdom:

“It is more shameful to distrust our friends than to be deceived by them”. Confucius

[that was before the authorities started wiring up friends]

“He who does not trust enough, Will not be trusted.” Lao Tzu

[yeah – trust is a two-way street]

“There is no instance of a nation benefitting from prolonged warfare.” Sun Tzu

[Wow! Does this apply to exceptional nations as well? The Soviet Union in Afghanistan, the US in Vietnam, you name it.]
