Talk:AI-complete


Wiki Education Foundation-supported course assignment

This article was the subject of a Wiki Education Foundation-supported course assignment, between 26 August 2019 and 11 December 2019. Further details are available on the course page. Student editor(s): Rusty858.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 13:11, 16 January 2022 (UTC)

untitled

I've expanded the etymology of the term. I deleted a paragraph about problems with the AI field. It was correct, but not related to the topic of the page. --LC


I deleted the "MindForth" code, which does not belong here, also known as "example" in previous edits. Montalvo 8 July 2005 05:25 (UTC)


If this term was coined by Fanya S. Montalvo, why is there no reference for this author? --84.190.147.182 13:19, 19 October 2005 (UTC)


I'm not an expert in computer AI terminology, but perhaps some parts of this article ought to at least be clarified. The phrase "AI-hard" and the direct reference to "NP-hard" in computational complexity theory make it sound like an AI-hard problem is one to which every problem in AI (presumably the class of problems that can be solved effectively by strong AI) can be reduced. Yet the problems given as examples of AI-complete (and therefore AI-hard, I assume?) do not seem to have the property that every problem solvable by strong AI is reducible to them. For instance, speech recognition cannot be reduced to Go-playing in any way I am familiar with. I understand quite well that there is not a perfect analogy between the class NP and the class AI, but perhaps this should be clarified some more in the article? Am I correct to think that the coining of this terminology was more professional humor than anything else? —Preceding unsigned comment added by 165.123.239.160 (talk) 20:25, 14 September 2008 (UTC)
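
For reference, the complexity-theory definitions that the analogy above leans on can be sketched as follows; this is the ordinary NP setting, given purely as an illustration of the intended reading and not as anything taken from the article:

$$A \le_p B \iff \exists f \in \mathrm{FP}\ \forall x:\ \big(x \in A \Leftrightarrow f(x) \in B\big)$$
$$H \text{ is NP-hard} \iff \forall L \in \mathrm{NP}:\ L \le_p H$$
$$H \text{ is NP-complete} \iff H \in \mathrm{NP} \text{ and } H \text{ is NP-hard}$$

Read literally, an "AI-hard" problem would be one to which every problem in some class "AI" reduces, which is exactly the property the comment above notes has not been demonstrated for examples such as Go-playing.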


This article has got to be total BS. If this is what the AI community has come down to, God help us... --unsigned


ADDED TO LIST OF AI-complete problems: "AI Peer Review", without waiting for my "Draft:AI Peer Review" article (currently awaiting review) to be accepted. Of course, when it is accepted I'll add the link if Wiki-AI doesn't add it artificially. Gravitoelectrotensor (talk) 19:20, 16 October 2018 (UTC)

Categories

Why is this article tagged with Computational complexity theory? It has nothing to do with it. It just borrows words from complexity theory. This topic would not appear in any textbook on complexity theory. I removed the category, but was reverted [1]. --Robin (talk) 14:08, 5 January 2010 (UTC)

Hello, yes I reverted. The reasoning is that the formalisation in the article is a generalisation of conventional computational complexity: it defines a computation as having both human and computer components, and there are equivalence classes of these computations. The article Complete (complexity) is in the computational complexity category, so AI-complete, as a generalisation, appears (to me) to belong there too. It probably doesn't appear in textbooks because it's quite recent. Disclaimer: I have no connection with this work but think it is an interesting and useful notion. pgr94 (talk) 13:05, 9 January 2010 (UTC)
Hi, thanks for the response. The thing is that "complete" has an extremely precise mathematical definition in computational complexity theory. Indeed, all of computational complexity theory is very mathematical, and there are no vague notions. AI-complete, on the other hand, is a fairly vague notion, and an informal one at that. There's no mathematical problem that is proven to be AI-complete, and indeed that might not be possible, since the problem "make machines behave like humans" would not even be expressible mathematically. It seems like AI-complete borrows terminology from complexity theory just to indicate the hardness of some AI problems in an informal sense. These problems are not complete in the technical sense of complexity theory. Moreover, "AI" is not a complexity class in complexity theory, unlike NP-complete (related to the class NP (complexity)) or PSPACE-complete (related to the class PSPACE). For all these reasons, I don't think this article should be in Category: Computational complexity theory. --Robin (talk) 14:53, 9 January 2010 (UTC)
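
To make the contrast above concrete: completeness is always defined relative to a formally specified class and a fixed notion of reduction. In standard notation (again only an illustrative sketch, not a formalisation taken from the article):

$$H \text{ is } \mathcal{C}\text{-complete} \iff H \in \mathcal{C} \ \text{and} \ \forall L \in \mathcal{C}:\ L \le H$$

Both conditions presuppose that $\mathcal{C}$ is a precisely defined set of problems and that $\le$ is a fixed reduction, which is what an informal class such as "AI" lacks.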

Distinction "Artificial general intelligence" and "AI-complete"[edit]

In my opinion we need a clear distinction between Artificial general intelligence and AI-complete in both articles. Currently they are described as being pretty much the same; at the very least, it is not easy to distinguish between them.

--89.204.154.200 (talk) 20:45, 10 November 2017 (UTC)

"AI complete" is a class of problems, "AGI" is a capability of machines. An "AI-complete" problem is a problem that requires AGI to be solved perfectly. However, "weak" or "narrow" solutions to AI-complete problems can be useful, if not perfect. ---- CharlesGillingham (talk) 19:15, 18 November 2017 (UTC)[reply]

Please fix this problem of mine

Krishna 2409:4043:40C:E78:0:0:13A0:50A1 (talk) 13:37, 4 April 2022 (UTC)

Tom Brady

Mathias R shaner aged 42 can add 2000 years to Brady and his family’s life if he wants it. Mathias R shaner 8148800990 2600:387:15:612:0:0:0:6 (talk) 07:05, 20 April 2022 (UTC)

Eagles

Techincr School ijer — Preceding unsigned comment added by 105.112.230.185 (talk) 16:44, 8 December 2022 (UTC)[reply]

Wiki Education assignment: Research Process and Methodology - SP23 - Sect 201 - Thu

This article was the subject of a Wiki Education Foundation-supported course assignment, between 25 January 2023 and 5 May 2023. Further details are available on the course page. Student editor(s): Liliability (article contribs).

— Assignment last updated by Liliability (talk) 21:41, 15 April 2023 (UTC)

send #sender ¶ paste ¶ ("Excel format")  ::

{ Value } " >O< " [ #send #sender ¶ paste ¶ ("Excel format")  ::   ] 2605:B100:D22:93AE:0:11:4BF8:2F01 (talk) 09:36, 29 October 2023 (UTC)

Machine translation

The following paragraph was removed from the main article. It is 10 years old and asserts (without citation) many requirements for good machine translation, even though current machine translation software meets none of those requirements yet still yields very good output. --Maximilian Janisch (talk) 16:05, 3 December 2023 (UTC)


To translate accurately, a machine must be able to understand the text. It must be able to follow the author's argument, so it must have some ability to reason. It must have extensive world knowledge so that it knows what is being discussed — it must at least be familiar with all the same commonsense facts that the average human translator knows. Some of this knowledge is in the form of facts that can be explicitly represented, but some knowledge is unconscious and closely tied to the human body: for example, the machine may need to understand how an ocean makes one feel to accurately translate a specific metaphor in the text. It must also model the author's goals, intentions, and emotional states to accurately reproduce them in a new language. In short, the machine is required to have a wide variety of human intellectual skills, including reason, commonsense knowledge and the intuitions that underlie motion and manipulation, perception, and social intelligence. Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it. Maximilian Janisch (talk) 16:05, 3 December 2023 (UTC)