09:15 - 09:45
Joao Leite (UNL)
10:00 - 10:30
Gernot Salzer (TUW)
10:30 - 11:00
Enrico Franconi (UniBZ)
Nominated for Best Thesis Award
11:15 - 11:45
Atefeh Keshavarzi (TUW)
Argumentation, and in particular computational models of argumentation, has recently become a major topic within artificial intelligence. Although a wide variety of argumentation formalisms exists, one popular, prominent, and simple formalism stands out, namely abstract argumentation frameworks (AFs). While AFs owe their popularity to their conceptual simplicity, they are not expressive enough to capture different kinds of relations between arguments. Several generalizations of AFs exist; in particular, abstract dialectical frameworks (ADFs), a powerful generalization of AFs, are widely studied. ADFs are capable of expressing arbitrary relations between arguments beyond simple attacks. In this work we close some gaps in existing research on ADFs. More specifically, we investigate whether some main results carry over from AFs to ADFs. For instance, we reformulate Dung's Fundamental Lemma, and we study under which conditions all semantics of an ADF coincide. We also study whether particular properties known to hold for certain subclasses of AFs extend to the world of ADFs. To this end, we introduce several such classes (symmetric ADFs, acyclic ADFs, attack symmetric ADFs, acyclic support symmetric ADFs, complete ADFs) and investigate their properties. A central aspect of our work is comparing the expressiveness of subclasses of AFs and ADFs from the perspective of realizability. Finally, we present an implementation of a generator that produces such subclasses of ADFs, and we use this generator to evaluate the effect of cycles on the performance of existing solvers for ADFs.
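As an aside for readers new to AFs, the admissibility-based semantics underlying this line of work can be illustrated by brute force on a tiny framework. This is a minimal sketch; the example AF and all function names are illustrative, not taken from the thesis:

```python
from itertools import chain, combinations

def subsets(args):
    """All subsets of the argument set."""
    return chain.from_iterable(combinations(args, r) for r in range(len(args) + 1))

def conflict_free(S, attacks):
    """No argument in S attacks another argument in S."""
    return not any((a, b) in attacks for a in S for b in S)

def defends(S, a, args, attacks):
    """S defends a if every attacker of a is attacked by some member of S."""
    return all(any((c, b) in attacks for c in S)
               for b in args if (b, a) in attacks)

def admissible(S, args, attacks):
    """Admissible: conflict-free and self-defending."""
    return conflict_free(S, attacks) and all(defends(S, a, args, attacks) for a in S)

# Example AF: a attacks b, b attacks c.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}
adm = [set(S) for S in subsets(args) if admissible(set(S), args, attacks)]
# Admissible sets: {}, {a}, {a, c} -- c is defended against b by a.
```

ADFs generalize this picture by attaching an acceptance condition to each argument instead of a plain attack relation.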
Nominated for Best Thesis Award
11:45 - 12:15
Satyadharma Tirtarasa (TUD)
Description logics (DLs) are arguably among the most important knowledge representation languages today. The number of available tools grows each year, helped by the fact that DLs are the underlying logic of the Web Ontology Language (OWL). However, building a knowledge base is prone to errors, and it is common for a knowledge base designer to find an unwanted entailment. The justification framework tackles this problem by providing the minimal subsets of the knowledge base that are responsible for an entailment. On the other hand, the non-standard fixed-domain semantics offers a complexity advantage over expressive DLs and more intuitive models in certain cases. In this work, we look into how the justification framework and the fixed-domain semantics interact. Furthermore, we provide two approaches, glass-box and black-box, for finding justifications for inconsistency.
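The black-box idea can be sketched independently of any particular DL: treat the reasoner as an opaque oracle and shrink an inconsistent axiom set to one minimal justification by trying to drop each axiom in turn. The `oracle` below is a toy stand-in for a real reasoner, and the whole sketch is mine, not the thesis implementation:

```python
def minimize(axioms, is_inconsistent):
    """Shrink an inconsistent axiom set to one minimal justification:
    drop each axiom in turn and keep the drop if inconsistency survives.
    Uses linearly many oracle calls."""
    just = list(axioms)
    for ax in list(just):
        trial = [a for a in just if a != ax]
        if is_inconsistent(trial):
            just = trial
    return just

# Toy oracle: the set is "inconsistent" iff it contains both p and not_p.
def oracle(axs):
    return "p" in axs and "not_p" in axs

axioms = ["p", "q", "not_p", "r"]
print(minimize(axioms, oracle))  # ['p', 'not_p']
```

Glass-box approaches, by contrast, instrument the reasoner itself to trace which axioms each inference actually used.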
13:30 - 14:00
Matti Berthold (UNL)
Strong persistence (SP) is arguably the most desirable property of forgetting operators for logic programs. Roughly, it means that no crucial information about the atoms that are not to be forgotten is lost. The limits of when SP can be preserved while forgetting have been studied recently, and classes of forgetting operators that satisfy SP whenever possible have been defined semantically. In my project I define a syntactic operator that acts as a representative of such a class and implement it within ASP. In my presentation I will briefly summarize forgetting in logic programming and introduce my operator.
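For intuition only: on *positive* (definite) programs, a purely syntactic forgetting can be obtained by unfolding the forgotten atom. This toy preserves the least model on the remaining atoms, but it is far weaker than the SP-preserving operator of the talk, which must handle full answer set semantics; the representation and names below are mine:

```python
def forget(program, p):
    """Forget atom p from a positive logic program by unfolding:
    replace p in rule bodies by each body defining p, then drop p's rules.
    Rules are (head, body) pairs with the body a tuple of atoms."""
    defs = [body for head, body in program if head == p]
    result = []
    for head, body in program:
        if head == p:
            continue                      # drop p's defining rules
        if p not in body:
            result.append((head, body))
            continue
        rest = [b for b in body if b != p]
        for d in defs:                    # one copy per definition of p
            result.append((head, tuple(sorted(set(rest) | set(d)))))
    return result

# Program: a :- p.   p :- b.   b.
prog = [("a", ("p",)), ("p", ("b",)), ("b", ())]
print(forget(prog, "p"))  # [('a', ('b',)), ('b', ())]
```

Under answer set semantics, forgetting in the presence of negation is exactly where unfolding alone stops working and SP becomes the hard requirement.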
14:00 - 14:30
Tobias Philipp (Secunet Security Networks AG)
Bugs have been part of software development since the first computers were built. In fact, the industry average is roughly 15-50 errors per 1,000 lines of code. In order to develop large, high-integrity software, we advocate separating complex systems into untrusted and trusted components. For the trusted components, we use SPARK – a formally analyzable subset of the programming language Ada 2012 – together with tools that bring logic-based confidence to software verification, such as automated and interactive theorem provers. In particular, the x86/64 separation kernel Muen was developed in SPARK, and the absence of runtime errors has been shown. The resulting systems are efficient in practice, and the formal verification of relatively small, trusted components is feasible.
In this talk, I will give a short introduction to SPARK and an outlook on how we use theorem provers to guarantee the absence of runtime errors and to prove functional correctness.
14:30 - 15:00
15:30 - 16:00
Manuel Bodirsky (TUD)
In a constraint satisfaction problem (CSP) we are given a finite set of variables and a finite set of constraints, and the task is to decide whether there is an assignment of values to the variables that satisfies all the constraints. This talk is about the computational complexity of such problems when the values are the rational numbers and the constraints are piecewise linear relations. When all the constraints are additionally convex, then the CSP can be solved in polynomial time, for instance with the celebrated ellipsoid method. In this talk I will show that the CSP can also be solved in polynomial time when the constraints satisfy a different closure condition, namely preservation under the componentwise application of the median operation. This new class of constraints includes all linear inequalities with at most two variables, but also many interesting non-convex relations. Our class is maximally tractable in the sense that adding any other constraint relation to the class results in an NP-hard CSP over the rationals. I close with some CSPs over the rational numbers whose computational complexity is open.
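The closure condition of the talk can be checked concretely on a small instance: the componentwise median of three satisfying assignments of a two-variable linear inequality again satisfies it. A minimal sketch over the rationals (the example relation and values are mine):

```python
from fractions import Fraction as Q

def median(a, b, c):
    """Median of three rationals."""
    return sorted([a, b, c])[1]

def median_of_tuples(s, t, u):
    """Componentwise median of three assignments (tuples of rationals)."""
    return tuple(median(a, b, c) for a, b, c in zip(s, t, u))

# The relation x <= y is preserved under the componentwise median:
# three satisfying assignments yield another satisfying assignment.
sols = [(Q(0), Q(1)), (Q(1, 2), Q(3)), (Q(-2), Q(0))]
m = median_of_tuples(*sols)
print(m[0] <= m[1])  # True
```

Convex relations need not be closed under the median, and median-closed relations need not be convex, which is why the two tractable classes of the talk are genuinely different.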
16:00 - 16:30
Andrea Condoluci (University of Bologna)
The λ-calculus is a handy formalism for specifying the evaluation of higher-order programs. It is not so handy, however, when one interprets the specification as an execution mechanism, because terms can grow exponentially with the number of β-steps. This is why implementations of functional languages and proof assistants always rely on some form of sharing of subterms. These frameworks, however, do not only evaluate λ-terms; they also have to compare them for equality. In the presence of sharing, one is actually interested in the equality (or, more precisely, α-conversion) of the underlying unshared λ-terms. The literature contains algorithms for such sharing equality that are polynomial in the sizes of the shared terms. In joint work with Accattoli and Sacerdoti Coen, just submitted to LICS 2018, we improve on the bounds in the literature by presenting the first linear-time algorithm. Like others before us, we are inspired by Paterson and Wegman's algorithm for first-order unification, itself based on representing terms with sharing as DAGs, and sharing equality as bisimulation of DAGs. In particular, we show that the algorithm computes the smallest bisimulation between the given DAGs, if any.
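To make "sharing equality as bisimulation of DAGs" concrete, here is a naive memoized check on first-order term DAGs. It is quadratic and ignores binders and α-conversion entirely, so it is only a toy illustration of the problem statement, not the linear-time algorithm of the talk; the node encoding is mine:

```python
def sharing_equal(n1, n2, g1, g2, seen=None):
    """Check that two DAG nodes unfold to the same first-order term,
    without ever unfolding them: memoize visited node pairs instead.
    Graphs map node ids to (label, tuple-of-child-ids)."""
    if seen is None:
        seen = set()
    if (n1, n2) in seen:
        return True                      # pair already being checked
    seen.add((n1, n2))
    lab1, kids1 = g1[n1]
    lab2, kids2 = g2[n2]
    if lab1 != lab2 or len(kids1) != len(kids2):
        return False
    return all(sharing_equal(a, b, g1, g2, seen) for a, b in zip(kids1, kids2))

# f(g(x), g(x)) with the subterm g(x) shared vs. fully unshared:
shared   = {0: ("f", (1, 1)), 1: ("g", (2,)), 2: ("x", ())}
unshared = {0: ("f", (1, 3)), 1: ("g", (2,)), 2: ("x", ()),
            3: ("g", (4,)), 4: ("x", ())}
print(sharing_equal(0, 0, shared, unshared))  # True
```

The memoization is what keeps the check polynomial despite the exponential size of the unfolded terms; the contribution of the talk is driving the number of visited pairs down to linear.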
09:00 - 09:30
Christian Al-rabbaa (TUD)
Answer Set Programming (ASP) is a form of declarative programming rooted in logic programming and non-monotonic reasoning, and a popular approach in knowledge representation and reasoning. Handling logic programs with a large number of answer sets is always a challenging task. This talk is about my master's thesis, whose goal is an efficient methodology for navigating the space of answer sets effectively and quickly. Exploiting the correspondence between ASP and product configuration, we project our findings onto a general product configurator. We chose this field because configuration mechanisms help companies avoid costly mistakes and cater -- to some extent -- to the unique requirements of their numerous customers, and because configuration corresponds closely to ASP. The ASP paradigm is used to represent a given product and to navigate through all its possible configurations. Furthermore, we go beyond the classical behavior of a configurator and provide configuration fixes that permit the selection of conflict-causing options.
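The correspondence can be pictured with a brute-force stand-in: valid configurations play the role of answer sets, and navigation means filtering that space by the user's partial choices. Everything below (option names, constraint) is an invented toy, not the thesis system:

```python
from itertools import product

options = ["sunroof", "leather", "sport"]

def valid(cfg):
    """Toy product constraint: the sport package requires leather."""
    return not (cfg["sport"] and not cfg["leather"])

def configurations(fixed):
    """All complete valid configurations consistent with the user's
    partial choices -- a brute-force stand-in for answer set navigation."""
    for values in product([False, True], repeat=len(options)):
        cfg = dict(zip(options, values))
        if valid(cfg) and all(cfg[o] == v for o, v in fixed.items()):
            yield cfg

remaining = list(configurations({"sport": True}))
print(len(remaining))  # 2: leather is forced, sunroof stays free
```

A real configurator would let an ASP solver do this enumeration and, when `fixed` becomes unsatisfiable, compute fixes instead of simply reporting an empty result.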
09:30 - 10:00
Existential rules (also known as tuple-generating dependencies) are a powerful rule language that is of interest to databases and knowledge representation alike. In this presentation, I will discuss the challenges that arise when reasoning with such rules, and I will present recent theoretical and practical results for tackling this problem. The restricted chase is a well-known sound and complete reasoning algorithm for existential rules, but it may fail to terminate (unavoidably, since reasoning is undecidable in this setting). I review recently proposed conditions that ensure restricted chase termination and give empirical evidence that these conditions apply to many real-world ontologies. Moreover, I discuss a new implementation of the restricted chase in the column-based rule engine VLog, and I present extensive evaluation results showing that VLog can compete with the state of the art in existential rule reasoning in terms of runtime, scalability, and memory efficiency.
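A stripped-down sketch of the restricted chase may help fix ideas: rules have one body atom and one head atom, variables are single uppercase letters (a convention of this toy, not of VLog), and a rule fires only when its head is not already satisfied by some extension of the match, which is exactly what distinguishes the restricted from the oblivious chase:

```python
from itertools import count

fresh = count()

def match(pattern, atom, subst):
    """Try to extend subst so that pattern maps onto atom; None on failure."""
    if pattern[0] != atom[0] or len(pattern) != len(atom):
        return None
    s = dict(subst)
    for p, a in zip(pattern[1:], atom[1:]):
        if p.isupper():                        # uppercase terms are variables
            if s.setdefault(p, a) != a:
                return None
        elif p != a:
            return None
    return s

def restricted_chase(facts, rules, max_steps=100):
    """Toy restricted chase: rules are (body_atom, head_atom) pairs;
    existential head variables are filled with fresh labelled nulls."""
    facts = set(facts)
    for _ in range(max_steps):
        changed = False
        for body, head in rules:
            for fact in list(facts):
                s = match(body, fact, {})
                if s is None:
                    continue
                # restricted check: skip if the head is already satisfied
                if any(match(head, f, s) is not None for f in facts):
                    continue
                for t in head[1:]:
                    if t.isupper() and t not in s:
                        s[t] = f"_n{next(fresh)}"   # fresh null
                facts.add((head[0],) + tuple(s.get(t, t) for t in head[1:]))
                changed = True
        if not changed:
            return facts
    return facts

# Every person has a parent:
rules = [(("person", "X"), ("hasParent", "X", "Y"))]
result = restricted_chase({("person", "alice")}, rules)
print(sorted(result))  # person(alice) plus one hasParent fact with a null
```

On this input the restricted check makes the chase terminate after one step; the termination conditions of the talk are about guaranteeing such behaviour statically for real rule sets.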
10:30 - 11:00
Adrián Rebola-Pardo (TUW)
Until recently, propositional proof complexity was a stable, if incomplete, field: all the proof systems suspected to be the most efficient were polynomially equivalent, including Extended Resolution, Frege, and Extended Frege. The newly proposed proof systems DRAT and PR, stemming from the SAT solving community, posed a problem: while they easily simulate all these proof systems, the converse was unknown. Had the simulation failed, this would have opened a new path towards the question whether coNP = NP, a necessary condition for NP = P. I will present recent results by Biere, Heule, Kiesl, and myself showing that the simulation does hold, so coNP = NP remains just as uncertain as it always has been.
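For readers unfamiliar with DRAT: its basic inference step accepts a clause if it is a reverse unit propagation (RUP) consequence of the formula, i.e. asserting the clause's negation makes unit propagation conflict. The sketch below shows only this RUP check (DRAT additionally admits RAT clauses and deletions, which are omitted); literals are nonzero integers, as in DIMACS:

```python
def unit_propagate(clauses):
    """Run unit propagation to a fixpoint; return None on a derived
    conflict, otherwise the set of implied literals."""
    assignment = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = [l for l in clause if -l not in assignment]
            if not unassigned:
                return None                       # all literals falsified
            if len(unassigned) == 1 and unassigned[0] not in assignment:
                assignment.add(unassigned[0])     # unit clause fires
                changed = True
    return assignment

def is_rup(formula, clause):
    """The clause follows by reverse unit propagation: adding the
    negation of each of its literals makes propagation conflict."""
    return unit_propagate(formula + [[-l] for l in clause]) is None

# (x1 or x2) and (not x1 or x2) entails x2 by unit propagation:
formula = [[1, 2], [-1, 2]]
print(is_rup(formula, [2]))  # True
```

The simulation results of the talk concern how short such proofs can be relative to Extended Frege proofs, not the per-step check itself.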
11:00 - 11:30
The low quality of scanned documents often prevents one from recognizing their contents. Frequently, however, only a few important words need to be distinguished. Spotting these words is a complicated task, which can be accomplished by exploiting the underlying document structure; this structure makes it possible to tackle the problem with machine learning. This thesis investigates the boundaries of the object detection problem; unlike previous research, however, we apply the detection pipeline to discover words in structured documents.