Reclaiming AI as a theoretical tool for cognitive science

tags
AI

Notes

AI is one of the cognitive sciences

NOTER_PAGE: (2 0.755188679245283 . 0.6233211233211233)

the term ‘AI’ was also used to refer to the aim of using computational tools to develop theories of natural cognition.

NOTER_PAGE: (2 0.7570754716981132 . 0.08608058608058608)

for decades there was a close dialogue between the fields of AI and cognitive psychology

NOTER_PAGE: (2 0.7707547169811321 . 0.5915750915750916)

AI (sometimes called cognitive simulation, or information processing psychology)

NOTER_PAGE: (2 0.8599056603773585 . 0.2515262515262515)

AI qua information processing psychology was built on the idea that human cognition is, or can be scientifically understood as, a form of computation; this view is also known as (minimal) computationalism

NOTER_PAGE: (3 0.3768867924528302 . 0.10683760683760683)

how this practice creates distorted and impoverished views of ourselves and deteriorates our theoretical understanding of cognition,

NOTER_PAGE: (3 0.6386792452830189 . 0.8174603174603174)

mathematical proof of inherent intractability (formally, NP-hardness) of the task that these AI engineers set themselves.

NOTER_PAGE: (4 0.13867924528301886 . 0.2490842490842491)

states of the art across many [AI] tasks are being improved[, but] the question is whether the hill we are climbing so rapidly is the right hill.

NOTER_PAGE: (4 0.19245283018867926 . 0.8070818070818071)

Flying pigs are also possible in principle; possible in principle bakes no bread.

NOTER_PAGE: (4 0.5372641509433962 . 0.1343101343101343)

lower-bound on the real-world complexity of constructing human-like AI from human data.

NOTER_PAGE: (5 0.45943396226415095 . 0.568986568986569)

expresses candidate algorithms A using a specification language, L_A.

NOTER_PAGE: (5 0.5688679245283019 . 0.7338217338217338)

extremely low bar for what counts as “approximate”.

NOTER_PAGE: (6 0.1089622641509434 . 0.6697191697191697)

perform human-like with a probability that is non-negligibly higher than chance level.

NOTER_PAGE: (6 0.21462264150943397 . 0.6306471306471306)

upper bound K on the size of the program that they can in principle encode (i.e., |L_A| ≤ K).

NOTER_PAGE: (6 0.49103773584905663 . 0.19719169719169719)

If a tractable method M for solving AI-by-learning would exist, then we could use M to solve Perfect-vs-Chance tractably,

NOTER_PAGE: (7 0.3580188679245283 . 0.8217338217338217)
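A rough reconstruction of the intractability argument sketched by the quotes above (the precise definitions of AI-by-learning and Perfect-vs-Chance are in the paper; the symbols M, A, L_A, and K follow the notes above, while the informal wording here is my own):

```latex
\begin{align*}
&\textbf{AI-by-learning (informal).} \text{ Given data } D \text{ of human behaviour,}\\
&\quad \text{output an algorithm } A \text{ expressed in } L_A \text{ (with size bound } K\text{)}\\
&\quad \text{that behaves human-like with probability non-negligibly}\\
&\quad \text{above chance.}\\[4pt]
&\textbf{Hardness.} \text{ If a tractable method } M \text{ solved AI-by-learning,}\\
&\quad \text{then } M \text{ would also decide Perfect-vs-Chance tractably;}\\
&\quad \text{Perfect-vs-Chance is NP-hard, so no tractable } M \text{ exists}\\
&\quad \text{(unless P = NP).}
\end{align*}
```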

In practice, AIs are being continuously produced which are claimed to be either human-like and human-level AI or inevitably on a path leading there.

NOTER_PAGE: (8 0.4877358490566038 . 0.6794871794871795)

Concretely, this means that they make lots of errors—deviating substantially from human behaviour

NOTER_PAGE: (8 0.5924528301886792 . 0.5482295482295483)

even if the problem may be practically solvable for trivially simple situations (small n), any attempts to scale up to situations of real-world, human-level complexity (medium to large n) will necessarily consume an astronomical amount of resources

NOTER_PAGE: (8 0.6339622641509434 . 0.17338217338217338)
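The "astronomical amount of resources" point is about asymptotic growth, not constant factors. A minimal illustrative sketch (not from the paper; the concrete cost functions below are assumptions chosen only to contrast polynomial with exponential scaling in n):

```python
# Illustrative only: why exponential cost in n dominates any budget.
# Here n stands for situation complexity (my stand-in for the paper's n).

def polynomial_cost(n: int) -> int:
    """A tractable cost profile: grows as n^3."""
    return n ** 3

def exponential_cost(n: int) -> int:
    """An intractable cost profile: grows as 2^n."""
    return 2 ** n

for n in (10, 50, 100):
    print(n, polynomial_cost(n), exponential_cost(n))
```

At n = 100 the polynomial cost is a modest 10^6 operations, while the exponential cost exceeds 10^30 — beyond any realistic compute budget, which is the sense in which scaling up "necessarily" fails.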

appear to contradict both intuition and experiences with existing AIs.

NOTER_PAGE: (8 0.7301886792452831 . 0.7936507936507936)

AIs appear human-like in non-rigorous tests, but the likeness is debunked when more rigorous tests are made

NOTER_PAGE: (9 0.5801886792452831 . 0.626984126984127)

no matter how much “better” AI gets, it will be off by light-years, wrong in exponentially many situations.

NOTER_PAGE: (9 0.5933962264150944 . 0.33760683760683763)

claims of abilities emerging with the scaling up of models are often revealed to be trivial products of the researcher’s choice of metric

NOTER_PAGE: (9 0.6438679245283019 . 0.5146520146520146)

models are not simply ‘well-performing’ or ‘accurate’ in the abstract but always in relation to and as quantified by some metric on some dataset

NOTER_PAGE: (9 0.7268867924528302 . 0.3705738705738706)

if you think your AI is very human-like, then you are not testing it critically enough

NOTER_PAGE: (9 0.8580188679245283 . 0.3076923076923077)

Makeism: The view that computationalism implies that (a) it is possible to (re)make cognition computationally; (b) if we (re)make cognition then we can explain and/or understand it; and possibly (c) explaining and/or understanding cognition requires (re)making cognition itself.

NOTER_PAGE: (10 0.12830188679245283 . 0.5335775335775336)

Such replacements are a clear case of “map territory confusion”, and with a poor map at that. This may seem to make sense if one believes that the AIs approximate human behaviour (though even then it is not a sufficient condition, Guest & Martin, 2023), but as we explained above the AIs do not actually approximate human behaviour.

NOTER_PAGE: (10 0.21367924528301888 . 0.25335775335775335)

I think a lot of people who are replacing humans with AIs never actually wanted humans at all, though; in fact, the inhumanity is exactly what they like. That AIs don't actually approximate humans all that closely is a major advantage.

Computationalism without makeism is still theoretically fruitful.

NOTER_PAGE: (12 0.2830188679245283 . 0.08424908424908426)

computationalism primarily aids cognitive science by providing conceptual and formal tools for theory development and for carefully assessing whether something is computationally possible or not, in principle and in practice.

NOTER_PAGE: (12 0.4674528301886792 . 0.26495726495726496)

we are dealing with massive underdetermination of theory by data: i.e., if we observe behaviours consistent with a computational level theory, we cannot infer which algorithms or neural processes underlie the behaviour.

NOTER_PAGE: (13 0.7952830188679245 . 0.16910866910866912)