Talks (30 min including discussion):

Alessandro Armando:

RDL: Rewrite and Decision Procedure Laboratory


RDL is a system for formula simplification in a quantifier-free first-order logic with equality. It implements Constraint Contextual Rewriting (CCR, for short), a sound and terminating schema for the tight integration of rewriting and decision procedures. The current system takes a clause and a set of facts as input and returns a simplified version of the input clause. RDL is fully automatic. In this talk, we describe the current version of the system, focusing on the relationships between the implementation and an abstract architecture derived from CCR. More information about the system and CCR can be found at the Constraint Contextual Rewriting home page.
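The core idea of contextual rewriting, applying conditional rewrite rules whose conditions are discharged against a context of facts by a decision procedure, can be sketched in a few lines. The following is a toy illustration with invented names (entails, rewrite) and string-level matching; it is not RDL's actual architecture or API.

```python
# Toy sketch of contextual rewriting: conditional rewrite rules fire
# only when a (stand-in) decision procedure shows their condition to
# follow from the context of known facts. Illustrative only.

def entails(context, condition):
    # Stand-in "decision procedure": a condition holds if it is
    # literally among the known facts. A real system would call a
    # solver, e.g. for linear arithmetic.
    return condition in context

def rewrite(term, rules, context):
    # Apply the first rule whose left-hand side matches and whose
    # condition is entailed by the context; iterate to a fixpoint.
    changed = True
    while changed:
        changed = False
        for lhs, cond, rhs in rules:
            if term == lhs and (cond is None or entails(context, cond)):
                term, changed = rhs, True
                break
    return term

# With the fact x>=y in the context, max(x,y) simplifies to x.
rules = [("max(x,y)", "x>=y", "x")]
simplified = rewrite("max(x,y)", rules, {"x>=y"})  # -> "x"
```

The point of the integration is visible even in this caricature: the rewriter alone could not justify the step, and the decision procedure alone could not restructure the term.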

Serge Autexier:

Supporting Evolutionary Formal Software Development

Abstract: not available yet

David Basin:

Reflective Metalogical Frameworks


A metalogical framework is a logic with an associated methodology that is used to represent other logics and to reason about their metalogical properties. We propose that logical frameworks can be good metalogical frameworks when their logics support reflective reasoning and their theories always have initial models. We present a concrete realization of this idea in rewriting logic. Theories in rewriting logic always have initial models and rewriting logic is reflective. Hence we can use reflection to represent at the meta-level theories and their parameterized extensions. Moreover, we can uniformly reflect induction principles for reasoning, at the meta-level, about these theories. We show that this reflective methodology provides a rich and powerful framework for formal metatheory; one can, for example, prove metatheorems that relate theories or establish properties of parameterized classes of theories. Finally, we report on the implementation of an inductive theorem prover in the Maude system, whose design is based on the results presented in this paper.

Alan Bundy:

On the psychological validity of induction


Human experts have the ability rapidly to assess the truth value of inductive conjectures with a high degree of accuracy. What mechanism do they use to do this? I will present evidence that they do *not* use inductive proof. I will also present two alternative mechanisms which are a better fit to the available psychological evidence (such as it is). These are: the constructive omega rule and a mechanism to support ellipsis based on a second order recursive functional. I will propose an experiment to assess the psychological validity of these rival mechanisms and ask for your advice on both the design of such experiments and the proposal of other explanatory mechanisms.

Slides of this talk are available

Christoph Benzmueller:

Proof-Planning Based on a Multi-Agent Architecture


The agent-based O-Ants system was originally developed as a state-of-the-art command suggestion mechanism for interactive theorem proving. Our recent successful application of this mechanism as the `reactive' basis of a normal-form natural deduction theorem prover demonstrates O-Ants' potential to also serve as a concurrent, flexible, and robust basis for proof planning. In particular, O-Ants could support a close integration of the user and/or external reasoners into the proof planning process.

Anatoli Degtyarev:


Abstract: not available yet

Louise Dennis:

Embedding Verification in Microsoft Excel (Joint work with Graham Collins)


The aim of the PROSPER project is to allow the embedding of verification technology in existing and new applications in such a way that the theorem proving is hidden, or presented to the end user in a natural way. This paper describes a system built with the PROSPER toolkit to test whether it satisfied this aim. The system embeds a proof engine based on the HOL theorem prover within Microsoft Excel, a popular commercial spreadsheet application. The outcome was a small prototype system which could handle a few simple conjectures. While simple, the system was created rapidly and easily, supporting the claim that the PROSPER toolkit allows effort to be focused on the development of customised verification procedures rather than on low-level communication details or the reimplementation of existing procedures. This case study also serves as an example of "lightweight" theorem proving and the "invisible" use of verification.

Dieter Hutter:

Proving the correctness of abstractions by induction (Joint work with Dennis Dams and Natalia Sidorova)


We investigate the possibility of using abstract interpretation techniques in model checking. Analysing Philips' Bounded Retransmission Protocol and a small protocol for registration handling in mobile communication, we abstract data structures to make the systems finite and then verify them with the model checker Spin. We apply the inductive theorem prover INKA to prove the correctness of our abstractions. An inductive style of theorem proving seems to be very convenient for such proofs. We indicate the issues arising from this application for a proof tool like INKA and illustrate the proof techniques used to automate the proofs.

Andrew Ireland:

Future of Proof Planning (discussion)

Short talk as a basis for motivating a discussion on strategies and how the proof planning community can best move forward.

Ewen Maclean:

Generalisation as a critic to the induction strategy.


This talk addresses the issue of generalisation in inductive proofs. We assess the possibility of performing generalisation at the point where induction fails to apply. We use the CLAM proof-planner as the basis for the experiment and implement a set of generalisation critics which we attach to the induction method. The induction method is revised so that it only succeeds if all of the occurrences of a certain variable are rewritable. The ripple analysis attached to the method is thus as stringent as a one-step look-ahead can be. We observe that the generalisation critics do not perform as well as the existing generalisation method on the synthesised test set, and we analyse the reasons for this failure.

Heiko Mantel:

A Case Study in the Mechanical Verification of Fault Tolerance


To date, there is little evidence that modular reasoning about fault-tolerant systems can simplify the verification process in practice. We study this question using a prominent example from the fault tolerance literature: the problem of reliable broadcast in point-to-point networks subject to crash failures of processes. The experiences from this case study show how modular specification techniques and rigorous proof reuse can indeed help in such undertakings.

Andreas Meier:

Proof Planning with Multiple Strategies


Humans have different problem solving strategies at their disposal and can flexibly employ several of them when solving a complex problem, whereas previous planning and theorem proving systems typically employ a single strategy or a hard-coded combination of a few strategies. We introduce multi-strategy proof planning, which allows a number of strategies to be combined and switched between during the planning process. Proof planning thereby becomes more robust, since it does not necessarily fail if one problem solving mechanism fails; rather, it can reason about the preference of strategies and about failures. Moreover, our strategies provide a means for structuring the vast amount of mathematical knowledge, so that the planner is not overwhelmed by it. Since different strategies can employ different algorithms, it is possible to combine precondition satisfaction planning with other algorithms that refine or modify a plan.
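The basic control idea, trying strategies in preference order and switching when one fails while recording the failures for later reasoning, can be sketched as follows. This is a toy illustration with invented names (plan, induction, arithmetic), not the interface of any actual proof planner.

```python
# Toy sketch of multi-strategy control: strategies are tried in
# preference order; a failing strategy returns None and the planner
# switches to the next one, recording failures. Illustrative only.

def plan(goal, strategies):
    failures = []
    for name, strategy in strategies:
        result = strategy(goal)
        if result is not None:
            return name, result        # first strategy that succeeds
        failures.append(name)          # record failure for later reasoning
    return None, failures              # all strategies failed

# Two stand-in strategies for the toy goal "prove something about n":
# the first only succeeds on even inputs, the second always succeeds.
induction = lambda n: f"induction proof for {n}" if n % 2 == 0 else None
arithmetic = lambda n: f"arithmetic proof for {n}"

name, proof = plan(4, [("induction", induction), ("arithmetic", arithmetic)])
# -> name == "induction"; with goal 3 the planner would switch to "arithmetic"
```

A real multi-strategy planner would of course interleave strategies within one proof attempt and reason over the recorded failures, rather than simply trying them in a fixed order.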

Silvio Ranise:

A Practical Extension Mechanism for Decision Procedures


The only way to meet the requirements posed by many industrial applications is to combine the expressiveness of general-purpose provers with the efficiency of specialised reasoners. Unfortunately, this is a surprisingly difficult task. The main problem is that only a tiny portion of the proof obligations arising in many practical applications falls exactly within the domain the specialised reasoners are designed for. In this paper we propose a general extension mechanism which allows decision procedures to tackle problems falling outside the scope they were originally designed for, thereby considerably enhancing their usefulness in practical applications.

Axel Schairer:

Towards Incrementally Constructing Correct Code


Constructing programs is an incremental activity. When the programs are required to satisfy a given specification, constructing the specification and constructing the correctness proofs should go hand in hand with editing the program, and should thus be incremental as well. I will report on my work towards a system that provides support for this. Starting from ideas taken from Jon Whittle's Cynthia system, I will describe the architecture of the system and its calculus, a variant of type theory tailored towards reasoning about functional programs in a natural way. This is work in progress.

Dirk Stefan Schweitzer:

Transformation of Tail-Recursive Functions


The classical techniques for the verification of imperative programs use loop invariants. But since the invention of loop invariants is hardly mechanizable, all systems based on these classical techniques are essentially interactive. An alternative approach for the verification of imperative programs is to translate these programs into functional programs and to use powerful existing induction theorem provers to verify the generated functional programs. A naive translation, however, generates tail-recursive functions which are not suitable for induction. Juergen Giesl therefore invented several rules to transform tail-recursive functions that are unsuitable for induction into functions that are. In my talk I will briefly illustrate the problem of a naive translation and then introduce the two main transformation rules.
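The problem can be seen on a standard toy example. The sketch below shows a loop summing 1..n, its naive tail-recursive translation, and an accumulator-free version that induction handles directly; the transformation shown is merely in the spirit of Giesl's rules, not their exact statement.

```python
# Illustrative only: why naive translation of loops yields functions
# that are awkward for induction, and what the transformed version
# looks like.

def sum_loop(n):
    # Imperative original: sums 1..n with a while loop.
    r, i = 0, n
    while i > 0:
        r, i = r + i, i - 1
    return r

def sum_acc(i, r):
    # Naive translation: the loop state (i, r) becomes the arguments
    # of a tail-recursive function. The accumulator r changes at every
    # call, which obstructs a direct induction on the result.
    return r if i == 0 else sum_acc(i - 1, r + i)

def sum_rec(n):
    # Transformed version: structural recursion without an accumulator,
    # so induction on n applies directly. The connection to the naive
    # translation is the lemma sum_acc(i, r) = r + sum_rec(i).
    return 0 if n == 0 else n + sum_rec(n - 1)
```

All three compute the same value (e.g. 15 for n = 5); the point is that an induction prover can verify properties of sum_rec much more easily than of sum_acc.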

Christoph Walther:

The VeriFun System


This talk reports on the VeriFun system, a program verifier for a (simple) functional language which is under development at the Darmstadt University of Technology. We illustrate the design goals of the project, present the calculi and heuristics underlying the implementation, and report on first experiences with the system.

Daniele Zini:

The Logic Broker Architecture (joint work with Alessandro Armando and Andrew Ireland)


The Logic Broker Architecture provides the infrastructure for interconnecting stand-alone provers in a sound, principled, and transparent way. We present the key concepts of the architecture and discuss our current prototype implementation based on CORBA. As a case study, we illustrate the coupling of CLaM with the simplifier of the ESC Prover.

Dieter Hutter
Last modified: Fri Mar 17 07:56:28 MET 2000