Doktor Towerstein

417 days ago
Unfiled. Edited by Doktor Towerstein 417 days ago
Doktor T Acts of Speech, according to FIPA and KQML
Both FIPA and KQML specify some "acts of speech" to be used in agent communication. Let's gather some refs here, summarize their info, and select a small subset that is still useful.
  • FIPA
Notes: FP stands for "feasibility preconditions" (conditions to be satisfied for the act to be planned) and RE stands for "rational effects" (the reason for which the act is selected).
The defined speech acts are:
  • Accept Proposal: "accept-proposal is a general-purpose acceptance of a proposal that was previously submitted (typically through a propose act). The agent sending the acceptance informs the receiver that it intends that (at some point in the future) the receiving agent will perform the action, once the given precondition is, or becomes, true. The proposition given as part of the acceptance indicates the preconditions that the agent is attaching to the acceptance. A typical use of this is to finalize the details of a deal in some protocol. For example, a previous offer to “hold a meeting anytime on Tuesday” might be accepted with an additional condition that the time of the meeting is 11.00." This seems appropriate for negotiation. It may be formalized as: "if both your preconditions and mine are met at some point in time, I will act as indicated". This may lead to additional beliefs about the future, depending on each agent's model of the action's postconditions (which might not be the same for all agents).
<i, accept-proposal (j, <j, act>, φ)> ≡
<i, inform (j, Ii Done (<j, act>, φ))>
FP: Bi α ∧ ¬Bi (Bifj α ∨ Uifj α)
RE: Bj α
α = Ii Done (<j, act>, φ)
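The reduction above (accept-proposal as an inform about an intention, gated by the FP and producing the RE) can be sketched in code. This is a minimal illustration, not a FIPA implementation; all class and function names here are invented, and beliefs are modelled crudely as sets of sentence strings.

```python
# Sketch of: accept-proposal(i, j, act, phi) == inform(i, j, alpha)
# where alpha = "I_i Done(<j, act>, phi)", with
# FP: Bi alpha and not Bi(Bifj alpha or Uifj alpha)
# RE: Bj alpha
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    beliefs: set = field(default_factory=set)                 # sentences i believes (Bi)
    beliefs_about_others: dict = field(default_factory=dict)  # j -> sentences i believes j already holds

    def can_inform(self, receiver: str, alpha: str) -> bool:
        """FP: i believes alpha, and does not believe the receiver already holds it."""
        already_known = alpha in self.beliefs_about_others.get(receiver, set())
        return alpha in self.beliefs and not already_known

def accept_proposal(i: Agent, j: Agent, act: str, phi: str):
    # alpha: i intends that j performs act once phi holds
    alpha = f"intend({i.name}, done({j.name}, {act}) given {phi})"
    if not i.can_inform(j.name, alpha):
        return None                      # FP fails: the act cannot be planned
    j.beliefs.add(alpha)                 # RE: Bj alpha
    i.beliefs_about_others.setdefault(j.name, set()).add(alpha)
    return ("inform", i.name, j.name, alpha)

# The meeting example from the quote above:
a, b = Agent("i"), Agent("j")
a.beliefs.add("intend(i, done(j, hold-meeting) given time=11.00)")
msg = accept_proposal(a, b, "hold-meeting", "time=11.00")
```

After the call, `msg` is an inform tuple and `b.beliefs` contains the intention sentence (the RE); a second identical call would return `None`, since `i` now believes `j` already holds alpha.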
  • Agree: analogous to Accept Proposal, except that the sender commits to performing the action itself (note <i, act> in place of <j, act>):
<i, agree (j, <i, act>, φ)> ≡
<i, inform (j, Ii Done (<i, act>, φ))>
FP: Bi α ∧ ¬Bi (Bifj α ∨ Uifj α)
RE: Bj α
α = Ii Done(<i, act>, φ)
  • Cancel
  • Call for Proposal
  • Confirm
  • Disconfirm
  • Failure
  • Inform
  • Inform If
  • Inform Ref
  • Not Understood
  • Propagate
  • Propose
  • Proxy
  • Query If
  • Query Ref
  • Refuse
  • Reject Proposal
  • Request
  • Request When
  • Request Whenever
  • Subscribe
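The full performative list above can be captured as a small enum plus an ACL-style message envelope, which is one way to start "selecting a small subset that is still useful". The field names loosely follow FIPA-ACL message parameters (performative, sender, receiver, content); this is an illustrative sketch, not a FIPA library.

```python
from dataclasses import dataclass
from enum import Enum

class Performative(Enum):
    """The 22 communicative acts listed above (cfp = call for proposal)."""
    ACCEPT_PROPOSAL = "accept-proposal"
    AGREE = "agree"
    CANCEL = "cancel"
    CFP = "cfp"
    CONFIRM = "confirm"
    DISCONFIRM = "disconfirm"
    FAILURE = "failure"
    INFORM = "inform"
    INFORM_IF = "inform-if"
    INFORM_REF = "inform-ref"
    NOT_UNDERSTOOD = "not-understood"
    PROPAGATE = "propagate"
    PROPOSE = "propose"
    PROXY = "proxy"
    QUERY_IF = "query-if"
    QUERY_REF = "query-ref"
    REFUSE = "refuse"
    REJECT_PROPOSAL = "reject-proposal"
    REQUEST = "request"
    REQUEST_WHEN = "request-when"
    REQUEST_WHENEVER = "request-whenever"
    SUBSCRIBE = "subscribe"

@dataclass
class ACLMessage:
    performative: Performative
    sender: str
    receiver: str
    content: str

msg = ACLMessage(Performative.PROPOSE, "i", "j", "hold-meeting(tuesday)")
```

A "small useful subset" could then be just a `frozenset` of these members (e.g. inform, request, propose, accept-proposal, reject-proposal), validated against incoming messages.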
KQML
658 days ago
Unfiled. Edited by Doktor Towerstein , Claudia Doppioslash 658 days ago
Doktor T Questions on mental states
m4farrel: Must stories be a sequence of *logically* connected moments? What if somebody with a mental illness was writing a story in first person? Would a first-person account of a mental health episode still be logical w.r.t. some logic, but perhaps derived from a set of beliefs that don't reflect reality?
Your favourite psychologist answers: Not necessarily. There are several ways in which a mind may go wrong. It may reason perfectly, but start from implausible assumptions that nobody else would entertain ("The box disappeared because the aliens took it away. Of course the door is closed: they used their high-dimensional pathways!"). Or the reasoning itself may be flawed, not following any rules of logic ("The aliens took the box. Ghosts are just too busy fabricating lies."). Or the basic facts may be true, but the subject is unable to focus and tell a consistent story ("They threw it away. He gave them the keys. Oh, my, he wanted the box. But it is now lost in the gutter. So they opened the door at night!"). I will try to find some more descriptions of first-person accounts from people with mental illness.
698 days ago
Unfiled. Edited by Claudia Doppioslash , Doktor Towerstein 698 days ago
Claudia D [Discussion moved to Discussions]
I made a discussions tag, and a Discussions page, we can use it as a chat, and then maybe make new discussion pages when it's too long.
708 days ago
Unfiled. Edited by Doktor Towerstein 708 days ago
Doktor T
  • Found this paper this month... Will need to learn Coq!!!!! :D
708 days ago
Unfiled. Edited by Doktor Towerstein 708 days ago
We should be able to have "character development arcs", where the mind model of the characters actually changes/evolves. Maybe Dynamic Epistemic Logic could be used for that?
Doktor T The trickiest part is, in my opinion, "a character should have something s/he wants". One might describe this in CTL* as AF((has Character Object) ∧ (wants Character Object)), i.e. "in every branch, at some point in the future" (possibly at different times in different branches). But do we define "want" as a primitive, or as some sort of derived concept? I guess that "wanting something" is equivalent to "always planning actions so that the foreseeable result is having something", and "planning" is equivalent to "guiding an action towards a certain outcome". I think that the DEL extension for planning includes some kind of logic of actions as well as the epistemic modality, so you can express sentences about actions (in a turn-game-like fashion). But I don't know if anyone merges DEL with CTL* to provide richer temporal expression.
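The evolving-mind-model idea can be sketched with the simplest DEL operation, a public announcement: a character's epistemic state is the set of possible worlds it considers, and announcing φ deletes the worlds where φ is false. This is a toy possible-worlds model with invented names, just to make the update mechanics concrete.

```python
# Toy DEL-style update. Worlds are dicts of atomic facts; an agent's state
# is the set of world indices it still considers possible.

def announce(state, phi):
    """Public announcement of phi: keep only the worlds satisfying phi."""
    return {w for w in state if phi(w)}

def believes(state, phi):
    """The agent believes phi iff phi holds in every world it considers possible."""
    return all(phi(w) for w in state)

# Initially the character does not know whether the box is in the room.
worlds = [
    {"box_in_room": True,  "door_open": False},
    {"box_in_room": False, "door_open": False},
]
state = set(range(len(worlds)))

box = lambda i: worlds[i]["box_in_room"]
before = believes(state, box)      # undecided: a box-less world is still possible
state = announce(state, box)       # "the box is in the room" is announced
after = believes(state, box)       # now believed in every remaining world
```

A character arc would then be a sequence of such updates (announcements, observations, actions), each one reshaping what the character believes; richer DEL uses action models rather than plain announcements, but the world-elimination pattern is the same.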
708 days ago
Unfiled. Edited by Doktor Towerstein 708 days ago
Doktor T Modal Logics
Here are a couple of good papers on dynamic epistemic logic and other useful modal logics:
Belief revision with plausibility values for states and actions:
A nice course on Dynamic Epistemic Logic using... Haskell!!!
History of Dynamic Epistemic Logic:
Planning based on a temporal logic (maybe belongs to planning?):
How to win at Prisoner's Dilemma by lying to your partner in crime:
I don't personally like BDI logics, but the idea in this paper is good:
Implicit coordinated plans for multiple agents (recommended but not really read):
Planning based on the DEL formalism:
False-belief tasks in DEL:
Another formalism: Dynamic Doxastic Logic (which is somewhat harder than DEL since beliefs can be wrong):
Two theorem provers for modal logics:
