1 - 5 of 5
  • 1. Ferme, Eduardo; Rott, H. Revision by comparison. 2004. In: Artificial Intelligence, ISSN 0004-3702, E-ISSN 1872-7921, Vol. 157, no. 1-2, p. 5-47. Article in journal (Refereed).
    Abstract [en]

    Since the early 1980s, logical theories of belief revision have offered formal methods for the transformation of knowledge bases or "corpora" of data and beliefs. Early models have dealt with unconditional acceptance and integration of potentially belief-contravening pieces of information into the existing corpus. More recently, models of "non-prioritized" revision were proposed that allow the agent rationally to refuse to accept the new information. This paper introduces a refined method for changing beliefs by specifying constraints on the relative plausibility of propositions. Like the earlier belief revision models, the method proposed is a qualitative one, in the sense that no numbers are needed in order to specify the posterior plausibility of the new information. We use reference beliefs in order to determine the degree of entrenchment of the newly accepted piece of information. We provide two kinds of semantics for this idea, give a logical characterization of the new model, study its relation with other operations of belief revision and contraction, and discuss its intuitive strengths and weaknesses.
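
    As a rough illustration of the kind of operation the abstract describes, the Python sketch below encodes entrenchment as numeric ranks and accepts a new sentence at the degree of a chosen reference belief. The ranks, the "~" negation convention, and the retraction rule are simplifying assumptions for illustration only, not the paper's qualitative construction.

        # Toy, rank-based reading of "revision by comparison" over
        # propositional literals. The numeric ranks are just a convenient
        # encoding of an entrenchment ordering, not part of the model.

        def negate(p: str) -> str:
            return p[1:] if p.startswith("~") else "~" + p

        def revise_by_comparison(ranks: dict, new: str, reference: str) -> dict:
            """Accept `new` with at least the entrenchment of `reference`,
            dropping a contradicting belief that is strictly less entrenched."""
            target = ranks[reference]  # degree borrowed from the reference belief
            out = {s: r for s, r in ranks.items()
                   if not (s == negate(new) and r < target)}
            out[new] = max(target, out.get(new, target))
            return out

        # K: beliefs mapped to entrenchment ranks (higher = harder to give up)
        K = {"bird": 1, "~flies": 1, "animal": 3}
        print(revise_by_comparison(K, "flies", "animal"))
        # {'bird': 1, 'animal': 3, 'flies': 3} -- '~flies' is retracted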

  • 2. Hanheide, Marc (University of Lincoln); Göbelbecker, Moritz (University of Freiburg); Horn, Graham S. (University of Birmingham); Pronobis, Andrzej (KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception (CVAP) and Centre for Autonomous Systems (CAS)); Sjöö, Kristoffer (KTH, CSC, CVAP and CAS; krsj@kth.se); Aydemir, Alper (KTH, CSC, CVAP and CAS); Jensfelt, Patric (KTH, CSC, CVAP and CAS); Gretton, Charles (University of Birmingham); Dearden, Richard (University of Birmingham); Janicek, Miroslav (DFKI, Saarbrücken); Zender, Hendrik (DFKI, Saarbrücken); Kruijff, Geert-Jan (DFKI, Saarbrücken); Hawes, Nick (University of Birmingham); Wyatt, Jeremy (University of Birmingham). Robot task planning and explanation in open and uncertain worlds. 2015. In: Artificial Intelligence, ISSN 0004-3702, E-ISSN 1872-7921. Article in journal (Refereed).
    Abstract [en]

    A long-standing goal of AI is to enable robots to plan in the face of uncertain and incomplete information, and to handle task failure intelligently. This paper shows how to achieve this. There are two central ideas. The first idea is to organize the robot's knowledge into three layers: instance knowledge at the bottom, commonsense knowledge above that, and diagnostic knowledge on top. Knowledge in a layer above can be used to modify knowledge in the layer(s) below. The second idea is that the robot should represent not just how its actions change the world, but also what it knows or believes. There are two types of knowledge effects the robot's actions can have: epistemic effects (I believe X because I saw it) and assumptions (I'll assume X to be true). By combining the knowledge layers with the models of knowledge effects, we can simultaneously solve several problems in robotics: (i) task planning and execution under uncertainty; (ii) task planning and execution in open worlds; (iii) explaining task failure; (iv) verifying those explanations. The paper describes how the ideas are implemented in a three-layer architecture on a mobile robot platform. The robot implementation was evaluated in five different experiments on object search, mapping, and room categorization.
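
    The two kinds of knowledge effects lend themselves to a small sketch. The Python below separates observation-backed beliefs from assumptions and reads failure explanations off the unverified assumptions; the class names, the set-based state, and the verification rule are hypothetical simplifications, not the paper's three-layer architecture.

        # Epistemic effects ("I believe X because I saw it") vs. assumptions
        # ("I'll assume X to be true"), kept apart so that failed tasks can
        # be explained by the assumptions that were never verified.
        from dataclasses import dataclass, field

        @dataclass
        class Action:
            name: str
            epistemic_effects: set = field(default_factory=set)  # observation-backed
            assumptions: set = field(default_factory=set)        # taken on credit

        @dataclass
        class BeliefState:
            observed: set = field(default_factory=set)
            assumed: set = field(default_factory=set)

            def apply(self, action: Action) -> None:
                self.observed |= action.epistemic_effects
                self.assumed |= action.assumptions - self.observed

            def explain_failure(self) -> set:
                # Candidate explanations: beliefs assumed but never observed.
                return self.assumed - self.observed

        state = BeliefState()
        state.apply(Action("goto_kitchen", assumptions={"door_open"}))
        state.apply(Action("detect_door", epistemic_effects={"door_closed"}))
        print(state.explain_failure())  # {'door_open'}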

  • 3. Hansson, Sven Ove (KTH, School of Architecture and the Built Environment (ABE), Philosophy and History of Technology, Philosophy). Relations of epistemic proximity for belief change. 2014. In: Artificial Intelligence, ISSN 0004-3702, E-ISSN 1872-7921, Vol. 217, p. 76-91. Article in journal (Refereed).
    Abstract [en]

    Relations of epistemic proximity are closely related to relations of epistemic entrenchment, but unlike the latter they refer not to sentences but to belief patterns expressed with a metalinguistic belief predicate B. Thus ¬Bp > B¬q means that disbelief in p is closer at hand (obtainable with less far-reaching changes in belief) than belief in not-q. The logic of epistemic proximity is investigated, and it is used to construct a uniform operation of belief change that has the standard operations as special cases, specified with success conditions such as Bp for revision and {¬Bp1, ..., ¬Bpn} for multiple contraction. Standard entrenchment relations are obtained by defining p to be less entrenched than q if and only if ¬Bp > ¬Bq.
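
    The closing definition can be made concrete with a small sketch. The Python below encodes "closeness at hand" as a numeric distance to the pattern ¬Bx (smaller = closer), which is purely an illustrative assumption; the paper works with relational structures, not numbers.

        # p is less entrenched than q iff ¬Bp > ¬Bq, i.e. iff giving up
        # belief in p is closer at hand than giving up belief in q.
        proximity = {        # distance to the pattern "not-B(x)" for each x
            "rain": 1,       # easily given up
            "2+2=4": 9,      # giving this up needs far-reaching change
        }

        def less_entrenched(p: str, q: str) -> bool:
            return proximity[p] < proximity[q]  # ¬Bp strictly closer than ¬Bq

        print(less_entrenched("rain", "2+2=4"))  # True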

  • 4. Hansson, Sven Ove (KTH, School of Architecture and the Built Environment (ABE), Philosophy and History of Technology, Philosophy). The co-occurrence test for non-monotonic inference. 2016. In: Artificial Intelligence, ISSN 0004-3702, E-ISSN 1872-7921, Vol. 234, p. 190-195. Article in journal (Refereed).
    Abstract [en]

    According to the co-occurrence test, q is (non-monotonically) inferable from p if and only if q holds in all the reasonably plausible belief change outcomes in which p holds. A formal model is introduced that contains representations of both the co-occurrence test (for non-monotonic inference) and the Ramsey test (for conditionals). In this model, (non-nested) conditionals and non-monotonic inference satisfy the same logical principles. In spite of this similarity, however, the two notions do not coincide and should be carefully distinguished from each other.
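
    The test itself is almost executable as stated. The sketch below models belief change outcomes as plain sets of sentences and checks q in every outcome containing p; the set representation and the choice to fail on an empty outcome set are assumptions made for illustration.

        # Co-occurrence test: q is inferable from p iff q holds in all the
        # reasonably plausible belief change outcomes in which p holds.
        def co_occurs(p: str, q: str, plausible_outcomes: list) -> bool:
            relevant = [K for K in plausible_outcomes if p in K]
            return bool(relevant) and all(q in K for K in relevant)

        outcomes = [
            {"bird", "flies"},
            {"bird", "flies", "small"},
            {"bird", "penguin"},  # a plausible outcome where 'flies' fails
        ]
        print(co_occurs("small", "flies", outcomes))  # True
        print(co_occurs("bird", "flies", outcomes))   # False: blocked by the penguin outcome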

  • 5. Sandewall, Erik (KTH, School of Education and Communication in Engineering Science (ECE), Department for Library services, Language and ARC, Publication Infrastructure). Defeasible inheritance with doubt index and its axiomatic characterization. 2010. In: Artificial Intelligence, ISSN 0004-3702, E-ISSN 1872-7921, Vol. 174, no. 18, p. 1431-1459. Article in journal (Refereed).
    Abstract [en]

    This article introduces and uses a representation of defeasible inheritance networks where links in the network are viewed as propositions, and where defeasible links are tagged with a quantitative indication of the proportion of exceptions, called the doubt index. This doubt index is used for restricting the length of the chains of inference. The representation also introduces the use of defeater literals that disable the chaining of subsumption links. The use of defeater literals replaces the use of negative defeasible inheritance links, expressing "most A are not B". The new representation significantly improves expressivity. Inference in inheritance networks is defined by a combination of axioms that constrain the contents of network extensions, a heuristic restriction that also has that effect, and a nonmonotonic operation of minimizing the set of defeater literals while retaining consistency. We introduce an underlying semantics that defines the meaning of literals in a network, and prove that the axioms are sound with respect to this semantics. We also discuss the conditions for obtaining completeness. Traditional concepts, assumptions and issues in research on nonmonotonic or defeasible inheritance are reviewed from the perspective of this approach.
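
    To make the role of the doubt index concrete, the sketch below tags links with doubt, accumulates it additively along a chain, and cuts the chain once a threshold is exceeded. The additive accumulation and the threshold value are illustrative assumptions, not the article's axiomatic characterization, and defeater literals are omitted.

        # Defeasible links carry a doubt index (0.0 = strict); chains of
        # inference are restricted by the doubt accumulated along the way.
        links = {  # (from, to) -> doubt index
            ("penguin", "bird"): 0.0,
            ("bird", "flier"): 0.2,
            ("flier", "migrates"): 0.3,
        }

        def inherits(src: str, dst: str, max_doubt: float = 0.4) -> bool:
            def search(node: str, doubt: float) -> bool:
                if node == dst:
                    return True
                return any(search(b, doubt + d)
                           for (a, b), d in links.items()
                           if a == node and doubt + d <= max_doubt)
            return search(src, 0.0)

        print(inherits("penguin", "flier"))     # True  (accumulated doubt 0.2)
        print(inherits("penguin", "migrates"))  # False (0.5 exceeds the threshold)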
