
Releases: PtrMan/20NAR1

Release 0.1.7

21 Aug 15:08
  • added procedural variable introduction
  • added some vision modules

Release 0.1.5

24 Apr 14:46

various small fixes

procedural

  • evidence now depends on the operation (op), which gets rid of the global-evidence hack that was necessary to play tic-tac-toe (TTT) effectively
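The per-op evidence change above can be sketched roughly as follows. This is a minimal illustration with hypothetical names (`OpEvidence`, `observe`), not the project's actual API: evidence counters are keyed by operation instead of one global counter, so outcomes of one op no longer contaminate another.

```python
from collections import defaultdict

# Hypothetical sketch: positive/total evidence tracked per operation name,
# replacing a single global evidence counter.
class OpEvidence:
    def __init__(self):
        self.counts = defaultdict(lambda: [0, 0])  # op -> [positive, total]

    def observe(self, op: str, success: bool):
        pos, total = self.counts[op]
        self.counts[op] = [pos + (1 if success else 0), total + 1]

    def frequency(self, op: str) -> float:
        pos, total = self.counts[op]
        return pos / total if total > 0 else 0.5  # neutral prior with no evidence

ev = OpEvidence()
ev.observe("^left", True)
ev.observe("^left", False)
ev.observe("^right", True)
print(ev.frequency("^left"))   # 0.5
print(ev.frequency("^right"))  # 1.0
```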

Release 0.1.4

14 Jan 19:50
  • resource distribution in the procedural reasoner is now based on maximum utility; the system decides at runtime how much CPU time to spend on particular derivations
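One plausible reading of utility-based resource distribution is sketched below, with a CPU-cycle budget split over candidate derivation tasks in proportion to their estimated utility. The function name and task representation are illustrative assumptions, not the project's implementation.

```python
# Hypothetical sketch: allocate a cycle budget across derivation tasks
# proportionally to utility, so high-utility derivations get more CPU.
def distribute_budget(tasks, total_cycles):
    """tasks: list of (name, utility) pairs with utility >= 0."""
    total_utility = sum(u for _, u in tasks)
    if total_utility == 0.0:
        return {name: 0 for name, _ in tasks}
    return {name: int(total_cycles * u / total_utility) for name, u in tasks}

alloc = distribute_budget([("deriveA", 0.8), ("deriveB", 0.2)], 1000)
print(alloc)  # {'deriveA': 800, 'deriveB': 200}
```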

Release 0.1.2

14 Jan 18:35

small updates that make this a stable version

implementation:

  • renamed SentenceDummy to Sentence (the name is now a stable API)

Release 0.1.0

25 Dec 22:32

bumping to a higher release number because the system currently seems stable and capable enough, before more experimental changes are made again

Release 0.0.30

22 Dec 00:59
  • switched to a better attention mechanism, heavily inspired by ONA (OpenNARS for Applications)

Release 0.0.26

21 Dec 21:06
  • the extensional intersection operator ExtInt (&) is now supported
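For context, an extensional intersection term (&, a, b) denotes things that are instances of every component. A minimal sketch of such a term, assuming nothing about 20NAR1's actual term representation: since the operator is commutative and idempotent, components can be stored as a sorted, deduplicated tuple to get structural equality for free.

```python
# Hypothetical sketch of an extensional-intersection term (&, a, b).
class ExtInt:
    def __init__(self, *components):
        # sorted + deduplicated: (&, a, b) == (&, b, a) and (&, a, a) == a's set
        self.components = tuple(sorted(set(components)))

    def __eq__(self, other):
        return isinstance(other, ExtInt) and self.components == other.components

    def __repr__(self):
        return "(&," + ",".join(self.components) + ")"

print(ExtInt("bird", "swimmer") == ExtInt("swimmer", "bird"))  # True
print(ExtInt("bird", "swimmer"))  # (&,bird,swimmer)
```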

Release 0.0.24

20 Dec 15:55
  • added automatic evaluation of Q&A
  • eval: added some examples for evaluation
  • Q&A works a bit better

Release 0.0.22

14 Dec 11:23
  • fixed deadlock
  • the complexity of declarative judgements is capped to keep the system under AIKR (the Assumption of Insufficient Knowledge and Resources)

known bugs:

  • small bug in the parser: it can accept partial lines

known problems:

  • revision of temporal beliefs isn't done under AIKR

Release 0.0.20

12 Dec 03:51

stable because Pong3 and TTT work

evaluation result:

TTT:
win/loss ratio = 2.1875 over 500 games
(at max # goals = 1000)

known bugs:

  • the declarative reasoner can cause a deadlock, because the mutexes for the concepts are locked incorrectly
  • small bug in the parser: it can accept partial lines

known problems:

  • term complexity isn't checked before beliefs are put into the system; this takes the system out of AIKR if the outside system feeds it extremely complicated terms, or if the reasoner uses derivation rules that produce more complex conclusions (there is only one such rule, and it isn't used in the NLP example(s))
  • revision of temporal beliefs isn't done under AIKR
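The missing complexity check described in the known problems above could look roughly like this. It is a sketch under assumed names (`complexity`, `try_add_belief`, a tuple-based term encoding) rather than the project's actual code: syntactic complexity is the number of nodes in the term tree, and beliefs above a fixed cap are rejected before insertion, which would keep the belief store bounded under AIKR.

```python
# Hypothetical sketch: reject beliefs whose term complexity exceeds a cap.
COMPLEXITY_CAP = 20  # illustrative value, not the project's constant

def complexity(term):
    # term is either an atom (str) or a compound (tuple of subterms)
    if isinstance(term, str):
        return 1
    return 1 + sum(complexity(sub) for sub in term)

def try_add_belief(beliefs, term):
    if complexity(term) > COMPLEXITY_CAP:
        return False  # refuse overly complex external input or derivation
    beliefs.append(term)
    return True

beliefs = []
print(complexity(("-->", "bird", "animal")))        # 4
print(try_add_belief(beliefs, ("-->", "bird", "animal")))  # True
```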