Turing Tests (Chess)

One of these sets of Chess moves represents a match between two human agents; the other has at least one machine agent as a player.

a) 1. e4 e5 2. Nf3 Nc6 3. Bb5 Nf6 4. d3 Bc5 5. Bxc6 dxc6 6. Nbd2 Bg4 7. h3 Bh5 8. Nf1 Nd7 9. Ng3 Bxf3 10. Qxf3 g6 11. Be3 Qe7 12. 0-0-0 0-0-0 13. Ne2 Rhe8 14. Kb1 b6 15. h4 Kb7 16. h5 Bxe3 17. Qxe3 Nc5 18. hxg6 hxg6 19. g3 a5 20. Rh7 Rh8 21. Rdh1 Rxh7 22. Rxh7 Qf6 23. f4 Rh8 24. Rxh8 Qxh8 25. fxe5 Qxe5 26. Qf3 f5 27. exf5 gxf5 28. c3 Ne6 29. Kc2 Ng5 30. Qf2 Ne6 31. Qf3 Ng5 32. Qf2 Ne6 ½–½

b) 1. Nf3 Nf6 2. d4 e6 3. c4 b6 4. g3 Bb7 5. Bg2 Be7 6. O-O O-O 7. d5 exd5 8. Nh4 c6 9. cxd5 Nxd5 10. Nf5 Nc7 11. e4 d5 12. exd5 Nxd5 13. Nc3 Nxc3 14. Qg4 g6 15. Nh6+ Kg7 16. bxc3 Bc8 17. Qf4 Qd6 18. Qa4 g5 19. Re1 Kxh6 20. h4 f6 21. Be3 Bf5 22. Rad1 Qa3 23. Qc4 b5 24. hxg5+ fxg5 25. Qh4+ Kg6 26. Qh1 Kg7 27. Be4 Bg6 28. Bxg6 hxg6 29. Qh3 Bf6 30. Kg2 Qxa2 31. Rh1 Qg8 32. c4 Re8 33. Bd4 Bxd4 34. Rxd4 Rd8 35. Rxd8 Qxd8 36. Qe6 Nd7 37. Rd1 Nc5 38. Rxd8 Nxe6 39. Rxa8 Kf6 40. cxb5 cxb5 41. Kf3 Nd4+ 42. Ke4 Nc6 43. Rc8 Ne7 44. Rb8 Nf5 45. g4 Nh6 46. f3 Nf7 47. Ra8 Nd6+ 48. Kd5 Nc4 49. Rxa7 Ne3+ 50. Ke4 Nc4 51. Ra6+ Kg7 52. Rc6 Kf7 53. Rc5 Ke6 54. Rxg5 Kf6 55. Rc5 g5 56. Kd4 1-0
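For readers who want to poke at the question programmatically (an illustration only, not the intended way to solve the puzzle), one crude signal is how often each game's moves coincide with a strong engine's first choice. The sketch below assumes the python-chess library and a locally installed Stockfish binary; neither is used in the post itself.

```python
import chess
import chess.engine

def engine_agreement(san_moves, engine_path="stockfish", depth=12):
    """Fraction of moves that coincide with the engine's first choice."""
    board = chess.Board()
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    matches = 0
    try:
        for san in san_moves:
            # Ask the engine for its preferred move before the move is played.
            best = engine.play(board, chess.engine.Limit(depth=depth)).move
            played = board.parse_san(san)
            matches += played == best
            board.push(played)
    finally:
        engine.quit()
    return matches / len(san_moves)

# Usage: pass the bare SAN tokens of game (a) or (b), without move numbers,
# e.g. engine_agreement(["e4", "e5", "Nf3", "Nc6", "Bb5", "Nf6"]) for the
# opening of game (a). A markedly higher agreement rate in one game is a
# heuristic hint, not proof of machine play.
```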

The answer is below.

Continue reading “Turing Tests (Chess)”

Talk: Evolution of Memory

I presented the following at the University of Maryland, College Park, on 30 Oct 2017. It summarises three papers and offers constructive feedback on where their methodology could be improved.

The bottom line is simple: we know memory is fallible, and we evolved this sort of memory mechanism rather than a purely rigid, veridical one. The question is why a seemingly imperfect mechanism would evolve at all.

Abstract: The following three approaches show that updating information in novel situations (rather than within a well-defined niche) differentiates the distinctly human form of memory from that which non-human agents possess: we need to update information as time passes and as social arrangements change. This concerns not so much the environment in which we must survive and reproduce, but rather the uniquely human terrain, the social landscape, i.e. what is "due" to others and, more importantly, what is "due" to us in particular. Rigid memory serves us well (and we seem to possess it just as non-human animals do, for tasks such as locating resources), but it breaks down in social interactions, where we must perform so-called moral bookkeeping to disentangle our ever-changing social obligations and, more saliently, what others owe us, since human memory has an ego-driven, self-knowing, meta-representational character.
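To make the contrast in the abstract slightly more concrete (a toy illustration only; the talk's point is conceptual, and none of the names or structures below come from the papers discussed), a rigid, veridical memory behaves like a write-once lookup table, whereas moral bookkeeping demands a record that is revised with every new interaction.

```python
from collections import defaultdict

# Rigid, veridical memory: once a resource location is learned, recall simply
# returns it unchanged -- adequate for a stable, well-defined niche.
resource_locations = {"water": "river bend", "shelter": "north cave"}

# Moral bookkeeping: a running ledger of who owes what to whom. Unlike the
# table above, its entries must be updated as social arrangements change.
ledger = defaultdict(float)

def record_favour(creditor, debtor, value=1.0):
    """Record that `debtor` now owes `creditor`; repayments subtract."""
    ledger[(debtor, creditor)] += value

record_favour("me", "Alice")   # Alice owes me a favour...
record_favour("Alice", "me")   # ...until I incur one in return.
print(resource_locations["water"], dict(ledger))
```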

Continue reading “Talk: Evolution of Memory”

The Generality Constraint [Draft]

Gareth Evans, in The Varieties of Reference, described "thought" in functional and structural terms. He called this the generality constraint. This short article gathers several formulations of the generality constraint and offers some suggestions on where it can be applied, both within and outside of philosophy.

  • Evans’ definition.
  • Carruthers’ weak and strong formulations.
  • Some considerations, such as Camp’s claim that categorial restrictions are not allowed, and my own counter-claim that a basic notion of extensibility does away with the need for such restrictions.
  • Lastly, how to actually use the generality constraint beyond arguing over what Evans meant. This includes future directions and applying the generality constraint outside of philosophy: type theory for computation, natural language processing, X-Phi tests, cross-cultural associations, cognitive science (autism, developmental psychology), and so on.

From The Varieties of Reference:
Generality Constraint (Unrestricted): If an agent can think the thought “A is an F,” and the agent can think the thought “B is a G,” then the agent can think the thoughts “B is an F” and “A is a G.”

Two interpretations of the constraint, via Peter Carruthers in “Invertebrate Concepts Confront the Generality Constraint (and Win)”, are as follows:
Strong Generality Constraint: If an agent possesses the concepts A and F (and is capable of thinking “A is an F”), then for all (or almost all) other concepts B and G that the agent could possess, it is metaphysically possible for the agent to think “A is a G,” and in the same sense possible for it to think “B is an F.”
Weak Generality Constraint: If an agent possesses the concepts A and F (and is capable of thinking “A is an F”), then for some other concepts B and G that the agent could possess, it is metaphysically possible for the agent to think “A is a G,” and in the same sense possible for it to think “B is an F.”
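Since the outline above gestures at type theory and other computational applications, here is a minimal sketch of how the two readings might be modelled. The representation (an agent as a set of singular concepts, a set of predicative concepts, and a set of thinkable subject–predicate pairs) is my own simplification, not anything Evans or Carruthers provide.

```python
from itertools import product

class Agent:
    def __init__(self, objects, predicates, thinkable):
        self.objects = set(objects)        # singular concepts: a, b, ...
        self.predicates = set(predicates)  # predicative concepts: F, G, ...
        self.thinkable = set(thinkable)    # pairs (a, F) the agent can think

    def satisfies_strong(self):
        # Strong reading: every possessed object/predicate pair is thinkable
        # (ignoring Carruthers' "almost all" qualification).
        return all(pair in self.thinkable
                   for pair in product(self.objects, self.predicates))

    def satisfies_weak(self):
        # Weak reading (coarsely rendered): each singular concept recombines
        # with at least one predicative concept, and vice versa.
        return (all(any((a, f) in self.thinkable for f in self.predicates)
                    for a in self.objects)
                and all(any((a, f) in self.thinkable for a in self.objects)
                        for f in self.predicates))

# Example: an agent that can think "a is F", "a is G", and "b is G" but not
# "b is F" meets the weak reading while failing the strong one.
agent = Agent({"a", "b"}, {"F", "G"},
              {("a", "F"), ("a", "G"), ("b", "G")})
print(agent.satisfies_strong(), agent.satisfies_weak())  # False True
```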

(As a work in progress, this post will be updated as time permits.)

An Introduction

This site encompasses some of my research as a graduate student in Logic, Computer Science, and Mathematics. My interests include computational mathematics, conceptual modeling, feature engineering, and explainable/ethical artificial intelligence.

As time permits, I shall publish research material and works in progress. This may include topics in Cognitive Systems (Organic and Machine Learning), Model Theory, and Computational Complexity, but I may also share some anecdotes about methodology as my lab work progresses. Additionally, my previous and forthcoming publications/collaborations shall reside here.