PEDECIBA Informática Instituto de Computación – Facultad de Ingeniería Universidad de la República Montevideo, Uruguay

Tesis de Doctorado en Informática

Formal analysis of security models for mobile devices, virtualization platforms, and domain name systems

Carlos Luna 2014

Formal analysis of security models for mobile devices, virtualization platforms, and domain name systems
Carlos Luna
ISSN 0797-6410
Tesis de Doctorado en Informática
Reporte Técnico RT 14-11
PEDECIBA
Instituto de Computación – Facultad de Ingeniería
Universidad de la República
Montevideo, Uruguay, 2014

PEDECIBA INFORMÁTICA
Instituto de Computación, Facultad de Ingeniería
Universidad de la República
Montevideo, Uruguay

TESIS DE DOCTORADO EN INFORMÁTICA

Formal analysis of security models for mobile devices, virtualization platforms, and domain name systems

Carlos Luna
[email protected]
Agosto de 2014

Director académico y de tesis: Gustavo Betarte

Tribunal:
Nazareno Aguirre, Departamento de Computación, Facultad de Ciencias Exactas, Físico-Químicas y Naturales, Universidad Nacional de Río Cuarto, Argentina (Revisor)
Ana Bove, Department of Computing Science and Engineering, Chalmers University of Technology, Sweden / PEDECIBA Informática
Eduardo Grampín, Instituto de Computación, Facultad de Ingeniería, Universidad de la República, Uruguay / PEDECIBA Informática
Alvaro Martín, Instituto de Computación, Facultad de Ingeniería, Universidad de la República, Uruguay / PEDECIBA Informática (Presidente)
Tamara Rezk, INRIA Sophia Antipolis-Méditerranée, France (Revisora)



Caminante, son tus huellas
el camino y nada más;
caminante, no hay camino,
se hace camino al andar...

Antonio Machado – Proverbios y Cantares; Campos de Castilla (1912)

Formal analysis of security models for mobile devices, virtualization platforms, and domain name systems

Abstract

In this thesis we investigate the security of security-critical applications, i.e. applications in which a failure may produce consequences that are unacceptable. We consider three areas: mobile devices, virtualization platforms, and domain name systems. The Java Micro Edition platform defines the Mobile Information Device Profile (MIDP) to facilitate the development of applications for mobile devices, like cell phones and PDAs. In this work we first study and compare formally several variants of the security model specified by MIDP to access sensitive resources of a mobile device. Hypervisors allow multiple guest operating systems to run on shared hardware, and offer a compelling means of improving the security and the flexibility of software systems. In this thesis we present a formalization of an idealized model of a hypervisor, and we establish (formally) that the hypervisor ensures strong isolation properties between the different operating systems, and guarantees that requests from guest operating systems are eventually attended. We also show that virtualized platforms are transparent, i.e. a guest operating system cannot distinguish whether it executes alone or together with other guest operating systems on the platform. The Domain Name System Security Extensions (DNSSEC) is a suite of specifications that provides origin authentication and integrity assurance services for DNS data. We finally present a minimalistic specification of a DNSSEC model which provides the grounds needed to formally state and verify security properties concerning the chain of trust of the DNSSEC tree. We develop all our formalizations in the Calculus of Inductive Constructions —a formal language that combines a higher-order logic and a richly-typed functional programming language— using the Coq proof assistant.

Keywords: Formal modelling, Security properties, Coq proof assistant, JME-MIDP, Virtualization, DNSSEC.


Análisis formal de modelos de seguridad para dispositivos móviles, plataformas de virtualización y sistemas de nombres de dominio

Resumen

En esta tesis investigamos la seguridad de aplicaciones de seguridad críticas, es decir aplicaciones en las cuales una falla podría producir consecuencias inaceptables. Consideramos tres áreas: dispositivos móviles, plataformas de virtualización y sistemas de nombres de dominio. La plataforma Java Micro Edition define el Perfil para Dispositivos de Información Móviles (MIDP) para facilitar el desarrollo de aplicaciones para dispositivos móviles, como teléfonos celulares y asistentes digitales personales. En este trabajo primero estudiamos y comparamos formalmente diversas variantes del modelo de seguridad especificado por MIDP para acceder a recursos sensibles de un dispositivo móvil. Los hipervisores permiten que múltiples sistemas operativos se ejecuten en un hardware compartido y ofrecen un medio para establecer mejoras de seguridad y flexibilidad de sistemas de software. En esta tesis formalizamos un modelo de hipervisor y establecemos (formalmente) que el hipervisor asegura propiedades de aislamiento entre los diferentes sistemas operativos de la plataforma, y que las solicitudes de estos sistemas son atendidas siempre. Demostramos también que las plataformas virtualizadas son transparentes, es decir, que un sistema operativo no puede distinguir si ejecuta sólo en la plataforma o si lo hace junto con otros sistemas operativos. Las Extensiones de Seguridad para el Sistema de Nombres de Dominio (DNSSEC) constituyen un conjunto de especificaciones que proporcionan servicios de aseguramiento de autenticación e integridad de origen de datos DNS. Finalmente, presentamos una especificación minimalista de un modelo de DNSSEC que proporciona los fundamentos necesarios para formalmente establecer y verificar propiedades de seguridad relacionadas con la cadena de confianza del árbol de DNSSEC. Desarrollamos todas nuestras formalizaciones en el Cálculo de Construcciones Inductivas —lenguaje formal que combina una lógica de orden superior y un lenguaje de programación funcional tipado— utilizando el asistente de pruebas Coq.

Palabras clave: Modelos formales, Propiedades de seguridad, Asistente de pruebas Coq, JME-MIDP, Virtualización, DNSSEC.

Contents

1 Introduction
  1.1 Security policies
  1.2 Security models
  1.3 Reasoning about implementations and models
  1.4 Formal analysis of security models for critical systems
      1.4.1 Mobile devices
      1.4.2 Virtualization platforms
      1.4.3 Domain name systems
      1.4.4 Personal contributions
  1.5 Formal language used
      1.5.1 Coq
      1.5.2 Notation
  1.6 Document outline

2 Formal analysis of security models for mobile devices
  2.1 Introduction
      2.1.1 MIDP security model
      2.1.2 Contents of the chapter
      2.1.3 Formalization
  2.2 Formally verifying security properties of MIDP 2.0
      2.2.1 Formalization of the MIDP 2.0 security model
      2.2.2 Verification of security properties
      2.2.3 Refinement
  2.3 A certified access controller for MIDP 2.0
      2.3.1 Formalization of the access control module
      2.3.2 Verification of security properties
      2.3.3 A certified access controller
  2.4 Formally verifying security properties of MIDP 3.0
      2.4.1 Security at the application level
      2.4.2 New device state
      2.4.3 Application level access request
      2.4.4 Conservative extension of security properties
      2.4.5 Weaknesses of the security mechanisms of MIDP 3.0
  2.5 A framework for defining and comparing access control policies
      2.5.1 An alternative security model for mobile devices
      2.5.2 A framework for access control modeling
      2.5.3 Permission grant policies
      2.5.4 Relating permission grant policies
  2.6 Related work
  2.7 Summary

3 Formally verifying security properties in an idealized model of virtualization
  3.1 Introduction
      3.1.1 A primer on virtualization
      3.1.2 Contents of the chapter
      3.1.3 Formalization
  3.2 The model
      3.2.1 Informal overview of the memory model
      3.2.2 Formalizing states
      3.2.3 Actions
      3.2.4 Traces
  3.3 Isolation properties
  3.4 Availability
  3.5 Extension of the model with cache and TLB
      3.5.1 Cache management
      3.5.2 States
      3.5.3 Actions semantics
  3.6 Verified implementation
      3.6.1 Error management
      3.6.2 Executable specification
      3.6.3 Soundness
  3.7 Isolation and transparency in the extended model
      3.7.1 OS isolation
      3.7.2 OS isolation in execution traces
      3.7.3 Transparency
  3.8 Related work
  3.9 Summary

4 A formal specification of the DNSSEC model
  4.1 Introduction
      4.1.1 A primer on the vulnerabilities of DNS
      4.1.2 Contents of the chapter
      4.1.3 Formalization
  4.2 Formalization of the DNSSEC model
      4.2.1 Formalizing states
      4.2.2 Events
  4.3 Verification of security properties
      4.3.1 An invariant of one-step execution
      4.3.2 Compromising the chain of trust
  4.4 Summary

5 Conclusion and future work
  5.1 Security models for mobile devices
  5.2 A model of virtualization
  5.3 The DNSSEC model

Bibliography

List of Figures

2.1  A typical MIDlet descriptor, specifying the required and optional permissions for a suite with two applications
2.2  Specification of event request when user intervention is not required
2.3  Specification of event request when intervention is required and the user accepts the request
2.4  Specification of event request when intervention is required and the user denies the request
2.5  A session for a suite with identifier id
2.6  Simulation relation between interp and the relation ↪. Given states s, s and events e, e such that s ⊑ s and e ⊑ e, for every state s′ and response r computed by interp there must exist a corresponding abstract state s′ refined by s′, reachable from s by the ↪ relation with the same response
2.7  Semantics of instructions
2.8  Control-flow graph example

3.1  Memory model of the platform
3.2  Formal specification of actions semantics
3.3  Memory model of the platform with cache and TLB
3.4  Axiomatic specification of action write
3.5  The step function
3.6  Execution of write action
3.7  Validation of write action precondition
3.8  Effect of write execution
3.9  Cache update
3.10 Formal specification of the action silent

4.1  Types of Resource Records
4.2  Resource records introduced by DNSSEC
4.3  DNS Message Format

5.1  LOC of Coq development – security for mobile devices
5.2  LOC of Coq development – model of virtualization
5.3  LOC of Coq development – DNSSEC formalization

List of Tables

2.1  Security-related events
2.2  Event preconditions
2.3  Event postconditions

3.1  Platform actions
3.2  Preconditions and error codes

4.1  DNSSEC model events
4.2  Error messages

Acknowledgments

En primer lugar, quiero agradecer a mi director académico y de tesis, Gustavo Betarte (Gustun), quien me brindó todo el apoyo necesario para la realización de esta tesis. Su trabajo, guía y motivación constantes fueron claves para mí durante estos años. Mil gracias Gustun!

A Gilles Barthe, por empeñarse –junto con Gustun– en que concretara el doctorado. Haber trabajado y publicado junto a él, en temas de virtualización, ha sido para mí una experiencia muy valiosa, de la cual aprendí mucho. Gracias Gilles también por los comentarios sobre versiones preliminares de este documento.

A los coautores de los artículos (en Argentina y Uruguay) que constituyen la base de esta tesis. En particular, a Santiago Zanella Béguelin, Ramin Roushani, Juan Manuel Crespo, Gustavo Mazeikis, Julio Pérez, Jesús Mauricio Chimento y Ezequiel Bazán.

A Juan Diego Campo, un gran compañero de trabajo, con quien compartimos el camino del doctorado en este último tiempo, y muchas jornadas de trabajo en torno a dos proyectos de investigación. Su tesis será sin dudas memorable!

A los miembros del tribunal, y en particular a los revisores, por sus comentarios y correcciones. Asimismo, a los revisores anónimos de los artículos referidos.

Al PEDECIBA Informática y al Instituto de Computación (InCo) de la Facultad de Ingeniería, por darme la oportunidad.

A mis compañeras y compañeros de grupo, tanto en el área de métodos formales como de seguridad informática. De distintas maneras, contribuyeron con este trabajo.

A las y los docentes que influyeron en mi iniciación en el campo de la investigación. En particular, a Jorge Aguirre, Gilles Barthe, Gabriel Baum, Gustavo Betarte, Cristina Cornes, Alberto Pardo, Nora Szasz y Alvaro Tasistro.

A la Universidad Nacional de Río Cuarto y la Universidad Nacional de Rosario (Argentina); a la Universidad de la República y la Universidad ORT (Uruguay), por brindarme la posibilidad de participar en estos años de una vida académica, que tanto me apasiona.

A mis colegas y amigos, por acompañarme y aguantarme en este camino...

Finalmente, agradezco a mi familia (de aquí y de allá) el apoyo y la comprensión durante tanto tiempo! Dedico especialmente esta tesis a ellas: Juli, Sofi, Mayru (mis tres niñas) y José.

Este trabajo fue financiado, parcialmente, por: i) STEVE: Security Through Verifiable Evidence (PDT 63/118, FCE 2006, DINACYT, Uruguay, 2007-2009); ii) VirtualCert: Towards a Certified Virtualization Platform (ANII-Clemente Estable, PR-FCE-2009-1-2568, Uruguay, 2010-2012); and iii) VirtualCert: Towards a Certified Virtualization Platform - Phase II (UDELAR-CSIC I+D, Uruguay, 2013-2015).


Chapter 1

Introduction

There are multiple definitions of the term safety-critical system. In particular, if a system failure could lead to consequences that are determined to be unacceptable, then the system is safety-critical. In essence, a system is safety-critical when we depend on it for our welfare. In this thesis, we investigate the security of three areas of safety-critical applications:

1. mobile devices: as the capabilities of mobile devices (like cell phones and PDAs) have increased and consumers have adopted them, these devices have become an integral part of how people perform tasks in their work and personal lives. Unfortunately, the benefits of using mobile devices are sometimes counteracted by security risks;

2. virtualization platforms: hypervisors allow multiple operating systems to run in parallel on the same hardware, by ensuring isolation between their guest operating systems. In effect, hypervisors are increasingly used as a means to improve system flexibility and security; unfortunately, they are often vulnerable to attacks;

3. domain name systems: the domain name systems constitute distributed databases that provide support to a wide variety of network applications such as electronic mail, WWW, and remote login. The important and critical role of these systems in software systems and networks makes them a prime target for (formal) verification.

1.1 Security policies

In general, security (mainly) involves the combination of confidentiality, integrity and availability [1]. Confidentiality can be understood as the prevention of unauthorized disclosure of information; integrity, as the prevention of unauthorized modification of information; and availability, as the prevention of unauthorized withholding of information or resources [2]. This list is, however, not exhaustive. We distinguish between security models and security policies. The term security model can be interpreted as a representation of the confidentiality, integrity and availability requirements of a system [3]. In its more general usage, the term denotes a particular mechanism adopted for computer security, such as access control, for enforcing (in particular) confidentiality. Security issues arise in many different contexts, so many security models have been developed.


A security policy is a specification of the protection goals of a system. This specification determines the security properties that the system should possess. A security policy defines the executions of a system that, for some reason, are considered unacceptable [4]. For instance, a security policy may affect:

• access control, and restrict the operations that can be performed on elements;

• availability, and restrict processes from denying others the use of a resource;

• information flow, and restrict what can be inferred about objects from observing system behavior.

To formulate a policy, it is necessary to describe the entities governed by the policy and to set the rules that constitute it. This could be done informally, in a natural-language document. However, approaches based on natural languages fall short of providing the required security guarantees: because of the ambiguity and imprecision of informal specifications, it is difficult to verify the correctness of the system. To avoid these problems, formal security models are considered.
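In the abstract, a policy can thus be read as a predicate that singles out the acceptable executions (equivalently, rules out the unacceptable ones). The following schematic Coq rendering is illustrative only, with placeholder names, and is not a definition used in the thesis.

(* Schematic only: a policy as a predicate over system executions. *)
Section Policies.
  Variable Execution : Set.

  Definition Policy := Execution -> Prop.   (* acceptable executions *)

  (* A system satisfies a policy when all its executions are acceptable. *)
  Definition satisfies (runs : Execution -> Prop) (p : Policy) : Prop :=
    forall e, runs e -> p e.
End Policies.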

1.2 Security models

Security models play an important role in the design and evaluation of high-assurance security systems. Their importance was already pointed out in the Anderson report [5]. The first model to appear was Bell-LaPadula [6], in 1973, in response to US Air Force concerns over the confidentiality of data in time-sharing mainframe systems. In the last decades, several countries developed specific security evaluation standards [7]. This initiative opened the path to worldwide mutual recognition of security evaluation results and, in this direction, the Common Criteria (CC) [8] were developed. The CC define seven evaluation assurance levels for security systems (EAL 1–7). To obtain an assurance level higher than EAL 4, developers are required to specify a security model of the system and to verify its security using formal methods. For example, the Gemalto and Trusted Logic companies obtained the highest level of certification (EAL 7) for their formalization of the security properties of the JavaCard platform [9, 10, 11], using the Coq proof assistant [12, 13].

State machines are a powerful tool for modeling many aspects of computing systems and, in particular, they can be employed to specify security models. The basic features of a state machine model are the concepts of state and state change. A state is a representation of the system under study at a given time, which should capture those aspects of the system that are relevant to our problem. Possible state transitions can be specified by a state transition function that defines the next state based on the current state and input; an output can also be produced. If we want to analyze a specific safety property of a system, like security, using a state machine model, we must first identify all the states that satisfy the property, and then check that all state transitions preserve it. If this is the case, and if the system starts in an initial state that has the property, then we can prove by induction that the property will always hold. Thus, state machines can be used to enforce a collection of security policies on a system.
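As a minimal sketch of this induction argument (standard Coq, with placeholder names; it is not part of the thesis development), reachable states can be defined inductively from the initial states and the transition relation, and any property preserved by every transition then holds of all reachable states.

(* Illustrative sketch only: State, init, step and inv are placeholders. *)
Section StateMachine.
  Variable State : Set.
  Variable init : State -> Prop.             (* initial states *)
  Variable step : State -> State -> Prop.    (* transition relation *)

  Inductive reachable : State -> Prop :=
    | reach_init : forall s, init s -> reachable s
    | reach_step : forall s s', reachable s -> step s s' -> reachable s'.

  Variable inv : State -> Prop.              (* the security property *)
  Hypothesis inv_init : forall s, init s -> inv s.
  Hypothesis inv_step : forall s s', inv s -> step s s' -> inv s'.

  (* By induction on reachability, the property holds of every reachable state. *)
  Lemma inv_reachable : forall s, reachable s -> inv s.
  Proof. induction 1; eauto. Qed.
End StateMachine.

Instantiating State, init, step and inv with the states, initial states, transitions and security property of a concrete model yields the proof principle described above.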


1.3 Reasoning about implementations and models

The increasingly important role of critical components in software systems (such as hypervisors in virtualization platforms or access control mechanisms in mobile devices) makes them a prime target for formal verification. Indeed, several projects have set out to formally verify the correctness of critical system implementations, e.g. [14, 15, 16]. Reasoning about implementations provides the ultimate guarantee that deployed critical systems provide the expected properties. There are, however, significant hurdles with this approach, especially if one focuses on proving security properties rather than functional correctness. First, the complexity of formally proving non-trivial properties of implementations might be overwhelming in terms of the effort it requires. Worse, the technology for verifying some classes of security properties is still underdeveloped: liveness properties are generally hard to prove, and there is currently no established tool-supported method for verifying, at the implementation level, security properties that involve two system executions (so-called 2-properties), which makes their formal verification on large and complex programs exceedingly challenging. For instance, the operating system isolation property is a 2-safety property [17, 18] that requires reasoning about two program executions. Second, many implementation details are orthogonal to the security properties to be established, and may complicate reasoning without improving the understanding of the features that are essential for guaranteeing the important properties. Thus, there is a need for complementary approaches where verification is performed on idealized models that abstract away from the specifics of any particular implementation, and yet provide a realistic setting in which to explore the security issues that pertain to the realm of those critical systems.
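To make the notion of a 2-property concrete, the following sketch (with illustrative names only, not the model of Chapter 3) states a noninterference-style isolation property: it quantifies over two executions, requiring that states indistinguishable to an observer remain indistinguishable after one step.

(* Illustrative sketch only: all names are placeholders. *)
Section TwoSafety.
  Variables State Obs OsId : Set.
  Variable step : State -> State -> Prop.
  Variable view : OsId -> State -> Obs.   (* what a given guest OS observes *)

  (* Two states are equivalent for [o] when they yield the same observation. *)
  Definition equiv (o : OsId) (s1 s2 : State) : Prop :=
    view o s1 = view o s2.

  (* A 2-safety, noninterference-style statement: it relates two executions. *)
  Definition os_isolation : Prop :=
    forall o s1 s2 s1' s2',
      equiv o s1 s2 -> step s1 s1' -> step s2 s2' -> equiv o s1' s2'.
End TwoSafety.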

1.4 Formal analysis of security models for critical systems

In this section we introduce the research lines —in the three domains of safety-critical applications listed above— and contributions of the thesis. The work was developed in the context of the following research projects [19]:

• ReSeCo: Reliability and Security of Distributed Software Components (STIC-AMSUD, 2006-2009);

• STEVE: Security Through Verifiable Evidence (PDT 63/118, FCE 2006, DINACYT, Uruguay, 2007-2009);

• Especificación Formal y Verificación de Sistemas Críticos (SeCyT-FCEIA ING266, UNR, Argentina, 2009-2010);

• VirtualCert: Towards a Certified Virtualization Platform (ANII-Clemente Estable, PR-FCE-2009-1-2568, Uruguay, 2010-2012);

• VirtualCert: Towards a Certified Virtualization Platform - Phase II (UDELAR-CSIC I+D, Uruguay, 2013-2015).

1.4.1 Mobile devices

Today, even entry-level mobile devices (e.g. cell phones) have access to sensitive personal data, are subscribed to paid services and can establish connections with external entities.


Users of such devices may, in addition, download and install applications from untrusted sites at will. This flexibility comes at a cost, since any security breach may expose sensitive data, prevent the use of the device, or allow applications to perform actions that incur a charge for the user. It is therefore essential to provide an application security model that can be relied upon: the slightest vulnerability may imply huge losses, due to the scale at which the technology has been deployed.

Java Micro Edition (JME) [20] is a version of the Java platform targeted at resource-constrained devices, which comprises two kinds of components: configurations and profiles. A configuration is composed of a virtual machine and a set of APIs that provide the basic functionality for a particular category of devices. Profiles further determine the target technology by defining a set of higher-level APIs built on top of an underlying configuration. This two-level architecture enhances portability and enables developers to deliver applications that run on a wide range of devices with similar capabilities. This work concerns the topmost level of the architecture, which corresponds to the profile that defines the security model we formalize.

The Connected Limited Device Configuration (CLDC) [21] is a JME configuration designed for devices with slow processors, limited memory and intermittent connectivity. CLDC together with the Mobile Information Device Profile (MIDP) provides a complete JME runtime environment tailored for devices like cell phones and personal data assistants. MIDP defines an application life cycle, a security model and APIs that offer the functionality required by mobile applications, including networking, user interface, push activation and persistent local storage. Many mobile device manufacturers have adopted MIDP since the specification was made available. Nowadays, literally billions of MIDP-enabled devices are deployed worldwide and the market acceptance of the specification is expected to continue to grow steadily.

The security model of MIDP has evolved since it was first introduced. In MIDP 1.0 [22] and MIDP 2.0 [23] the main goal is the protection of sensitive functions provided by the device. In MIDP 3.0 [24] the protection is extended to the resources of an application, which can be shared with other applications. Some of these features are incorporated in the Android security model [25, 26, 27] for mobile devices, as we will see in Section 2.6.

Contributions

The contributions of the work in this domain are manifold:

1. We describe a formal specification of the MIDP 2.0 security model that provides an abstraction of the state of a device and security-related events that allows us to reason about the security properties of the platform where the model is deployed;

2. The security of the desktop edition of the Java platform (JSE) relies on two main modules: a Security Manager and an Access Controller. The Security Manager is responsible for enforcing a security policy declared to protect sensitive resources; it intercepts sensitive API calls and delivers permission and access requests to the Access Controller. The Access Controller determines whether the caller has the necessary rights to access resources. While, in the case of the JSE platform, there exists a high-level specification of the Access Controller module (the basic mechanism is based on stack inspection [28]), no equivalent formal specification exists for JME. We illustrate the pertinence of the specification of the MIDP 2.0 security model that has been developed by specifying and proving the correctness of an access control module for the JME platform;

3. The latest version of MIDP, MIDP 3.0, introduces a new dimension in the security model at the application level based on the concept of authorizations, allowing delegation of rights between applications. We extend the formal specification developed for MIDP 2.0 to incorporate the changes introduced in MIDP 3.0, and show that this extension is conservative, in the sense that it preserves the security properties we proved for the previous model;

4. Besson, Dufay and Jensen [29] put forward an alternative access control model for mobile devices that generalizes the MIDP model by introducing permissions with multiplicities, extending the way permissions are granted by users and consumed by applications. One of the main outcomes of the work we report here is a general framework that sets up the basis for defining, analyzing and comparing access control models for mobile devices. In particular, this framework subsumes the security model of MIDP and several variations, including the model defined in [29].

1.4.2 Virtualization platforms

Hypervisors allow several operating systems to coexist on commodity hardware, and provide support for multiple applications to run seamlessly on the guest operating systems they manage. Moreover, hypervisors provide a means to guarantee that applications with different security policies can execute securely in parallel, by ensuring isolation between their guest operating systems. In effect, hypervisors are increasingly used as a means to improve system flexibility and security, and it was already predicted in 2007 [30] that their use would become ubiquitous in enterprise data centers and cloud computing.

The increasingly important role of hypervisors in software systems makes them a prime target for formal verification. Indeed, several projects have set out to formally verify the correctness of hypervisor implementations. One of the most prominent initiatives is the Microsoft Hyper-V verification project [14, 15, 31], which has made a number of impressive achievements towards the functional verification of the legacy implementation of the Hyper-V hypervisor, a large software component that combines C and assembly code (about 100 kLOC of C and 5 kLOC of assembly). The overarching objective of the formal verification is to establish that a guest operating system cannot observe any difference between executing through the hypervisor or directly on the hardware. The other prominent initiative is the L4.verified project [16], which completed the formal verification of the seL4 microkernel, a general-purpose operating system of the L4 family. The main thrust of the formal verification is to show that an implementation of the microkernel correctly refines an abstract specification. Subsequent machine-checked developments prove that seL4 enforces integrity, authority confinement [32] and intransitive non-interference [33]. The formalization does not model the cache.

Reasoning about implementations provides the final guarantee that deployed hypervisors provide the expected properties. There are, however, as mentioned previously, serious difficulties with this approach. For example, many implementation details are orthogonal to the security properties to be established, and may complicate reasoning without improving the understanding of the essential features for guaranteeing isolation among guest operating systems. Therefore, there is a need for complementary approaches where verification is performed on idealized models that abstract away from the specifics of any particular hypervisor, and yet provide a realistic environment in which to investigate the security problems that pertain to the realm of hypervisors.

Contributions

The work presented here initiates such an approach by developing a minimalistic model of a hypervisor, and by formally proving that the hypervisor correctly enforces isolation between guest operating systems, and under mild hypotheses guarantees basic availability properties to guest operating systems. In order to achieve some reasonable level of tractability, our model is significantly simpler than the setting considered in the Microsoft Hyper-V verification project: it abstracts away many specifics of memory management, such as shadow page tables (SPTs), and of the underlying hardware and runtime environment, such as I/O devices. Instead, our model focuses on the aspects that are most relevant for isolation properties, namely read and write resources on machine addresses, and is sufficiently complete to allow us to reason about isolation properties. Specifically, we show that an operating system can only read and modify the memory it owns, and we prove a non-influence property [34] stating that the behavior of an operating system is not influenced by other operating systems. In addition, our model allows reasoning about availability; we prove, under reasonable conditions, that all requests of a guest operating system to the hypervisor are eventually attended, so that no guest operating system waits indefinitely for a pending request. Overall, our verification effort shows that the model is adequate to reason about safety properties (read and write isolation), 2-safety properties (OS isolation), and liveness properties (availability).

Additionally, in this work we present an implementation of a hypervisor in the programming language of Coq, and a proof that it realizes the axiomatic semantics, on an extended memory model with a formalization of the cache and Translation Lookaside Buffer (TLB). Although it remains idealized and far from a realistic hypervisor, the implementation arguably provides a useful mechanism for validating the axiomatic semantics. The implementation is total, in the sense that it computes for every state and action a new state or an error. Thus, soundness is proved with respect to an extended axiomatic semantics in which transitions may lead to errors. An important contribution in this part of the work is a proof that OS isolation remains valid for executions that may trigger errors. Finally, we show that virtualized platforms are transparent, i.e. a guest operating system cannot distinguish whether it executes alone or together with other guest operating systems on the platform.
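As an indication of the flavour of these statements (a schematic sketch with placeholder names, not the model developed in Chapter 3), read and write isolation can be phrased as follows: every read or write access performed by a guest operating system falls on a machine address it owns.

(* Illustrative sketch only: all names are placeholders. *)
Section ReadWriteIsolation.
  Variables OsId Addr State : Set.
  Variable owner  : State -> Addr -> OsId.           (* owner of a machine address *)
  Variable reads  : State -> OsId -> Addr -> Prop.   (* OS o reads address a *)
  Variable writes : State -> OsId -> Addr -> Prop.   (* OS o writes address a *)

  (* Read and write isolation: an OS only accesses memory it owns. *)
  Definition rw_isolation : Prop :=
    forall s o a, (reads s o a \/ writes s o a) -> owner s a = o.
End ReadWriteIsolation.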

1.4.3 Domain name systems

The Domain Name System (DNS) [35, 36] constitutes a distributed database that provides support to a wide variety of network applications such as electronic mail, WWW, and remote login. The database is indexed by domain names. A domain name represents a path in a hierarchical tree structure, which in turn constitutes a domain name space. Each node of this tree is assigned a label; thus, a domain name is built as a sequence of labels, separated by dots, from a particular node up to the root of the tree.

A distinguishing feature of the design of DNS is that the administration of the system can be distributed among several (authoritative) name servers. A zone is a contiguous part of the domain name space that is managed by a set of authoritative name servers. Distribution is then achieved by delegating part of the administration of a zone to a set of delegated sub-zones.

DNS is a widely used scalable system, but it was not conceived with security concerns in mind, as it was designed to be a public database with no intention of restricting access to information. Nowadays, a large number of distributed applications make use of domain names. Confidence in the working of those applications depends critically on the use of trusted data: fake information inside the system has been shown to lead to unexpected and potentially dangerous problems. Already in the early 1990s, serious security flaws were discovered by Bellovin and eventually reported in [37]. Different types of security issues concerning the working of DNS have been discussed in the literature; see, for instance, [38, 37, 39, 40, 41, 42]. Identified vulnerabilities of DNS make it possible to launch different kinds of attacks, namely: cache poisoning, client flooding, dynamic update vulnerability, information leakage and compromise of the DNS servers' authoritative database [43, 37, 44, 45, 46].

Domain Name System Security Extensions (DNSSEC) [47, 48, 49] is a suite of Internet Engineering Task Force (IETF) specifications for securing information provided by DNS. More specifically, this suite specifies a set of extensions to DNS which are oriented to provide mechanisms that support authentication and integrity of DNS data, but not its availability or confidentiality. In particular, the security extensions were designed to protect resolvers from forged DNS data, such as that generated by DNS cache poisoning [45, 46], by digitally signing DNS data using public-key cryptography. The keys used to sign the information are authenticated via a chain of trust, starting with a set of verified public keys that belong to the DNS root zone, which is the trusted third party. The DNSSEC standards were finally released in 2005 and a number of testbeds, pilot deployments, and services have been rolled out in the last few years [50, 51, 52, 53, 54, 55, 56]. In particular, the main objective of the OpenDNSSEC project [55] is to develop open-source software that manages the security of domain names on the Internet.

Contributions

The important and critical role of DNSSEC in software systems and networks makes it a prime target for formal analysis. This thesis presents the development of a minimalistic specification of a DNSSEC model that nevertheless provides a realistic setting in which to explore the security issues that pertain to the realm of DNS. The specification puts forward an abstract formulation of the behavior of the protocol and the corresponding security-related events, where security goals, such as the prevention of cache poisoning attacks, can be given a formal treatment. In particular, the formal model provides the grounds needed to formally state and verify security properties concerning the chain of trust generated along the DNSSEC tree.
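As a small illustration of this structure (hypothetical definitions, not the formalization of Chapter 4), a domain name can be represented in Coq as the sequence of labels read from a node up to the root, and a zone as the set of names sharing a given suffix.

Require Import List String.

(* Illustrative definitions only, not the formalization of Chapter 4. *)
Definition Label := string.

(* A domain name: the sequence of labels from a node up to the root. *)
Definition DomainName := list Label.

(* "www.example.com", read towards the root. *)
Definition example_name : DomainName :=
  ("www" :: "example" :: "com" :: nil)%string.

(* A zone rooted at the suffix [z] contains every name that extends it. *)
Definition in_zone (z name : DomainName) : Prop :=
  exists prefix, name = prefix ++ z.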

1.4.4 Personal contributions

The thesis builds upon and extends a number of previously published papers:

1. Betarte, G., Luna, C., Zanella Béguelin, S.: A formal specification of the MIDP 2.0 security model. In: 4th International Workshop on Formal Aspects in Security and Trust, FAST 2006. Volume 4691 of Lecture Notes in Computer Science, Springer-Verlag (2006) 220–234;

2. Betarte, G., Luna, C., Roushani Oskui, R.: A certified access controller for JME-MIDP 2.0 enabled mobile devices. In: 2009 International Conference of the Chilean Computer Science Society, SCCC 2009, Los Alamitos, CA, USA, IEEE Computer Society (2009) 51–58;

3. Betarte, G., Luna, C., Mazeikis, G.: Formal specification and analysis of the MIDP 3.0 security model. In: 2009 International Conference of the Chilean Computer Science Society, SCCC 2009, Los Alamitos, CA, USA, IEEE Computer Society (2009) 59–66;

4. Betarte, G., Crespo, J., Luna, C.: A framework for the analysis of access control models for interactive mobile devices. In: Types for Proofs and Programs, International Conference, TYPES 2008. Volume 5497 of Lecture Notes in Computer Science, Springer-Verlag (2009) 49–63;

5. Barthe, G., Betarte, G., Campo, J., Luna, C.: Formally verifying isolation and availability in an idealized model of virtualization. In Butler, M., Schulte, W., eds.: Formal Methods 2011. Volume 6664 of Lecture Notes in Computer Science, Springer-Verlag (2011) 231–245;

6. Barthe, G., Betarte, G., Campo, J.D., Chimento, J.M., Luna, C.: Formally verified implementation of an idealized model of virtualization. In Matthes, R., Schubert, A., eds.: 19th International Conference on Types for Proofs and Programs (TYPES 2013). Volume 26 of Leibniz International Proceedings in Informatics (LIPIcs), Dagstuhl, Germany, Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik (2014) 45–63;

7. Betarte, G., Eixarch, E. B., Luna, C. D.: A formal specification of the DNSSEC model. Electronic Communications of the European Association of Software Science and Technology (ECEASST) 48 (2011) 1–21.

In each of the papers that constitute the core of this thesis, I worked on the analysis of the different security models considered, on the development of the formal specifications, on the formulation of relevant security properties, on the proof of many of these properties, and on the construction of certified implementations. I have also collaborated in the analysis of the results and in the writing of those papers. Finally, I participated in the following works, directly related to this thesis but not part of it:

• Barthe, G., Betarte, G., Campo, J., Luna, C.: Cache-Leakage Resilient OS Isolation in an Idealized Model of Virtualization. In: IEEE 25th Computer Security Foundations Symposium (2012) 186–197;

• Barthe, G., Betarte, G., Campo, J., Luna, C., Pichardie, D.: System-level non-interference for constant-time cryptography. To appear in proceedings of the 21st ACM Conference on Computer and Communications Security (2014).

Throughout the development of the thesis, and in the context of the research projects involved, I have served as (co)director of the final degree projects of the following students (in Argentina and Uruguay): Santiago Zanella Béguelin, Ramin Roushani Oskui, Juan Manuel Crespo, Gustavo Mazeikis, Julio Pérez, Jesús Mauricio Chimento, and Ezequiel Bazán.

1.5 Formal language used

1.5.1 Coq

The Coq proof assistant [12, 13] is free open-source software that provides a (dependently typed) functional programming language and a reasoning framework based on higher-order logic to perform proofs of programs. Coq allows the development of mathematical theories: this includes defining objects (sets, lists, functions, programs); making statements (using basic predicates, logical connectives and quantifiers); and finally writing proofs. The Coq environment supports advanced notations, proof search and automation, and modular developments. It also provides program extraction towards languages like OCaml and Haskell for the execution of (certified) algorithms [57, 58]. These features are very useful to formalize and reason about complex specifications and programs.

As examples of its applicability, Coq has been used as a framework for formalizing programming environments and designing special platforms for software verification: Leroy and others developed in Coq a certified optimizing compiler for a large subset of the C programming language [59]; Barthe and others used Coq to develop CertiCrypt, an environment of formal proofs for computational cryptography [60]. Also, as mentioned previously, the Gemalto and Trusted Logic companies obtained CC EAL 7 certification for their formalization, developed in Coq, of the security properties of the JavaCard platform.

We developed our specifications in the Calculus of Inductive Constructions (CIC) [61, 62, 63] using Coq. The CIC is a type theory: in brief, a higher-order logic in which the individuals are classified into a hierarchy of types. The types work very much as in strongly typed functional programming languages, which means that there are basic elementary types, types defined by induction, like sequences and trees, and function types. An inductive type is defined by its constructors, and its elements are obtained as finite combinations of these constructors. Data types are called "Sets" in the CIC (in Coq). When the requirement of finiteness is removed we obtain the possibility of defining infinite structures, called coinductive types, like infinite sequences. On top of this, a higher-order logic is available which serves to predicate on the various data types. The interpretation of the propositions is constructive, i.e. a proposition is defined by specifying what a proof of it is, and a proposition is true if and only if a proof of it has been constructed. The type of propositions is called Prop.
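The following standard Coq fragment (illustrative only, not part of the thesis development) shows the ingredients just described: an inductive data type in Set, an inductive predicate in Prop whose proofs are finite derivations, and a coinductive type obtained by dropping the finiteness requirement.

(* A data type (a "Set"): binary trees of natural numbers, whose elements
   are finite combinations of the constructors leaf and node. *)
Inductive tree : Set :=
  | leaf : tree
  | node : tree -> nat -> tree -> tree.

(* A proposition (a "Prop") over that data type: occurrence of a key. *)
Inductive In_tree (x : nat) : tree -> Prop :=
  | in_root  : forall l r, In_tree x (node l x r)
  | in_left  : forall l y r, In_tree x l -> In_tree x (node l y r)
  | in_right : forall l y r, In_tree x r -> In_tree x (node l y r).

(* A constructive proof is a finite derivation built from these rules. *)
Lemma example_in : In_tree 2 (node leaf 2 leaf).
Proof. apply in_root. Qed.

(* Dropping the finiteness requirement yields coinductive types, e.g. streams. *)
CoInductive stream (A : Set) : Set :=
  | SCons : A -> stream A -> stream A.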

1.5.2 Notation

We use standard notation for equality and logical connectives. Implication and universal quantification may be encoded in Coq using dependent product, while equality and the other connectives can be defined inductively. Anonymous predicates are introduced using lambda notation, e.g. (λn : nat. n = 0) : nat → Prop is a predicate that, when applied to n, is true iff n is zero.


We extensively use record types; a record type definition

    R ≝ [[ field1 : T1, . . . , fieldn : Tn ]]                                (1.1)

generates a non-recursive inductive type with just one constructor, namely mkR, and projection functions fieldi : R → Ti. We write ⟨a1, . . . , an⟩ instead of mkR a1 . . . an when the type R is obvious from the context. Application of projection functions is abbreviated using dot notation (i.e. fieldi r = r.fieldi). Field update of a record r is written as r.[fieldi := v]; we also use simultaneous field update, which is defined in the usual way. For each field fieldi in a record type we define a binary relation ≡fieldi over objects of the type as

    r1 ≡fieldi r2  ≝  ∀ j, j ≠ i → r1.fieldj = r2.fieldj                      (1.2)

We define an inductive relation I by giving introduction rules of the form

    P1  · · ·  Pm
    ─────────────  rule                                                       (1.3)
     I x1 . . . xn

where free occurrences of variables are universally quantified. Similarly, inductive types are defined by giving constructors in the following form

    T ≝ | C1 : A1,1 → . . . → A1,n1 → T
        ...
        | Cm : Am,1 → . . . → Am,nm → T                                       (1.4)

where C1 . . . Cm are the constructors of T. The following parametric (co)inductive types are assumed to be predefined:

• option A, with constructors None : option A and Some : A → option A;

• finite lists over A, list A. The empty list is denoted by [] and the (infix) constructor that inserts an element a at the front of a list s is denoted by a ⊲ s. Finite snoc lists over A, snocList A, that is, lists that are constructed by inserting elements at the back, are also used. [] denotes the empty snoc list and s ⊳ a denotes the insertion of an element a at the back of the snoc list s. The symbol ⊕ stands for the concatenation operator on lists;

• infinite lists. The type of streams of type A is written [A]∞. Objects of type [A]∞ are constructed with the (infix) operator ::, hence x :: xs is of type [A]∞ whenever x is of type A and xs is of type [A]∞. Given s : [A]∞ we let s[i] denote the i-th element of s;

• sets of type A, defined as set A, where set is an inductive type encoding sets as lists. We use {a0, . . . , an}A to denote the set of elements a0, . . . , an of type A. When the type A is obvious from the context, we write {a0, . . . , an}. Classical notation is used for set operations (∪, ∩, \, ∈, ⊆).

Moreover, we make extensive use of partial maps and bounded partial maps. The type of partial maps from objects of type A into objects of type B is written A ↦ B, and the type of partial maps from A to B whose domain is of size smaller than or equal to k (where k is a natural number) is written A ↦k B. Application of a map m to an object a of type A is denoted m[a], and map update is written m[a := b], where b overwrites the value, if any, associated to the key a.

In the formal constructions presented in Section 2.5 we use inductive types that have constructors tagged either as valid or invalid. In that section we adopt the convention that a type with the same name but prefixed with the valid keyword is the subtype consisting of terms derivable using only constructors tagged as valid.

1.6 Document outline

The remainder of this document is organized as follows. In Chapter 2 we study and compare formally several variants of the security model specified by MIDP to access sensitive resources of a device. In Chapter 3 we present the formalization of an idealized model of a hypervisor, and we establish (formally) that the hypervisor ensures strong isolation properties between the different operating systems, and guarantees that requests from guest operating systems are eventually attended. We then develop a certified implementation of a hypervisor —in the programming language of Coq— on an extended memory model with a formalization of the cache and TLB, and we also analyze security properties for the extended model in the presence of execution errors. In Chapter 4 we present a minimalistic specification of a DNSSEC model which provides the grounds needed to formally state and verify security properties concerning the chain of trust of the DNSSEC tree. Finally, the conclusions of the research are formulated in Chapter 5, along with some statistics on the formal developments in Coq and the possible lines of future work.


Chapter 2

Formal analysis of security models for mobile devices

2.1 Introduction

JME technology provides integral mechanisms that guarantee security properties for mobile devices, defining a security model which restricts access to sensitive functions by badly or maliciously written applications. The architecture of JME is based upon two layers above the virtual machine: the configuration layer and the profile layer. The Connected Limited Device Configuration (CLDC) is a minimum set of class libraries to be used in devices with slow processors, limited working memory and the capacity to establish only low-bandwidth communications. This configuration is complemented with the Mobile Information Device Profile (MIDP) to obtain a run-time environment suitable for mobile devices such as mobile phones, personal digital assistants and pagers. The profile also defines an application life cycle, a security model and APIs that provide the functionality required by mobile applications, such as networking, user interface, push activation, and local persistent storage.

2.1.1 MIDP security model

A security model at the platform level, based on sets of permissions to access the functions of the device, is defined in the first version of MIDP (version 1.0) and refined in the second version (version 2.0). In the third version (version 3.0) a new dimension to the model is introduced: the security at the application level. The security model at the application level is based on authorizations, so that an application can access the resources shared by another application. In this section we describe the two security levels considered by MIDP in its different versions. Security at the platform level In the original MIDP 1.0 specification, any application not installed by the device manufacturer or a service provider runs in a sandbox that prohibits access to security sensitive APIs or functions of the device (e.g. push activation). Although this sandbox security model effectively prevents any rogue application from jeopardizing the security of the 15

16

Formal analysis of security models for mobile devices

device, it is excessively restrictive and does not allow many useful applications to be deployed after issuance. In contrast, MIDP 2.0 introduces a new security model at the platform level based on the concept of protection domain. A protection domain specifies a set of permissions and the modes in which those permissions can be granted to applications bound to the domain. Each API or function on the device may define permissions in order to prevent it from being used without authorization. Permissions can be either granted in an unconditional way, and used without requiring any intervention from the user, or in a conditional way, specifying the frequency with which user interaction will be required to renew the permission. More concretely, for each permission a protection domain specifies one of three possible modes: • blanket: permission is granted for the whole application life cycle; • session: permission is granted until the application is closed; • oneshot: permission is granted for one single use. The set of permissions effectively granted to an application is determined by the permissions declared in its descriptor, the protection domain it is bound to, and the answers received from the user to previous requests. A permission is granted to an application if it is declared in its application descriptor and either its protection domain unconditionally grants the permission, or the user was previously asked to grant the permission and its authorization remains valid. The MIDP 2.0 model distinguishes between untrusted and trusted applications. An untrusted application is one whose origin and integrity can not be determined. These applications, also called unsigned, are bound to a protection domain with permissions equivalent to those in a MIDP 1.0 sandbox. On the other hand, a trusted application is identified and verified by means of cryptographic signatures and certificates based on the X.509 Public Key Infrastructure [64]. These signed applications can be bound to more specific and permissive protection domains. Thus, this model enables applications developed by trusted third parties to be downloaded and installed after issuance of the device without compromising its security. Security at the application level In MIDP 3.0 applications which are bound to different protection domains may share data, components, and events. The Record Management System (RMS) API specifies methods that make it possible for record stores from one application to be shared with other applications. With the Inter-MIDlet Communication (IMC) protocol an application can access a shareable component from another one running in a different execution environment. Applications that register listeners or register themselves will only be launched in response to events triggered by authorized applications. By restricting access to shareable resources of an application, such as data, runtime IMC communication and events, the MIDP 3.0 security model introduces a new dimension at the application level. This new dimension introduces the concept of access authorization, which enables an application to individually control access requests from other applications. An application that intends to restrict access to its shareable resources must declare access authorizations in its application descriptor. There are four different types of access

authorization declarations involving one or more of the following application credentials: domain name, vendor name, and signing certificates. These four types apply respectively to:

1. applications bound to a specific domain;
2. applications from a certain vendor with a certified signature;
3. applications from a certain vendor but without a certified signature;
4. applications signed under a given certificate.

When an application tries to access the shareable resources of another application, its credentials are compared with those required by the access authorization declaration. If there is a mismatch the application-level access is denied.

2.1.2 Contents of the chapter

This chapter builds upon and extends a number of previously published papers [65, 66, 67, 68]. The rest of the chapter is organized as follows: Section 2.2 overviews the formalization of the MIDP 2.0 security model, presents some of its verified properties and proposes a methodology to refine the specification and obtain an executable prototype. Section 2.3 describes a high level formal specification of an access controller for JME-MIDP 2.0, and presents an algorithm that has been proved correct with respect to that specification. Section 2.4 develops extensions of the formal specification presented in Section 2.2 concerning the changes introduced by MIDP 3.0, and puts forward some weaknesses of the security mechanisms introduced in that (latest) version of the profile. Section 2.5 presents a framework suitable for defining, analyzing, and comparing the access control policies that can be enforced by (variants of) the security models considered in this work, and to prove desirable properties they should satisfy. Then Section 2.6 discusses related work and finally Section 2.7 concludes with a summary of the chapter.

2.1.3 Formalization

The full development is available for download at http://www.fing.edu.uy/inco/grupos/gsi/sources/midp

2.2 Formally verifying security properties of MIDP 2.0

In this section we overview the formalization of the MIDP 2.0 security model presented in [65]. We begin, in Section 2.2.1, by introducing some types and constants used in the remainder of the formalization, we then define the set of valid device states and security-related events, give a transition semantics for events based on pre- and postconditions and define the concept of a session. Section 2.2.2 reports on some security properties that we have verified, while Section 2.2.3 presents and discusses a proposal for constructing a refinement of the formal specification to obtain an executable prototype.


MIDlet-Jar-URL: http://www.foo.com/foo.jar
MIDlet-Jar-Size: 90210
MIDlet-Name: Organizer
MIDlet-Vendor: Foo Software
MIDlet-Version: 2.1
MIDlet-1: Alarm, alarm.png, organizer.Alarm
MIDlet-2: Agenda, agenda.png, organizer.Agenda
MIDlet-Permissions: javax.microedition.io.Connector.socket, javax.microedition.io.PushRegistry
MIDlet-Permissions-Opt: javax.microedition.io.Connector.http
MicroEdition-Configuration: CLDC-1.1
MicroEdition-Profile: MIDP-2.0
MIDlet-Certificate-1-1: MIICgDCCAekCBEH11wYwDQ...
MIDlet-Jar-RSA-SHA1: pYRJ8Qlu5ITBLxcAUzYXDKnmg...

Figure 2.1: A typical MIDlet descriptor, specifying the required and optional permissions of a suite with two applications

2.2.1 Formalization of the MIDP 2.0 security model

Sets and constants In MIDP, applications (usually called MIDlets) are packaged and distributed as suites. A suite may contain one or more MIDlets and is distributed as two files, an application descriptor file and an archive file that contains the actual Java classes and resources. A suite that uses protected APIs or functions should declare the corresponding permissions in its descriptor either as required for its correct functioning or as optional. The content of a typical MIDlet descriptor is shown in Figure 2.1. Let P ermission be the total set of permissions defined by every protected API or function on the device and Domain the set of all protection domains. Let us introduce, as a way of referring to individual MIDlet suites, the set SuiteID of valid suite identifiers. We will represent a descriptor as a record composed of two predicates, required and optional, that identify respectively the set of permissions declared as required and those declared as optional by the corresponding suite, Descriptor def = [[ required : P ermission → P rop, optional : P ermission → P rop ]]

(2.1)

A record type is used to represent an installed suite, with fields for its identifier, associated protection domain and descriptor, Suite def = [[ id : SuiteID, domain : Domain, descriptor : Descriptor ]]

(2.2)

Permissions may be granted by the user to an active MIDlet suite in either of three modes, for a single use (oneshot), as long as the suite is running (session), or as long as the suite remains installed (blanket). Let M ode be the enumerated set of user interaction modes {oneshot, session, blanket} and ≤m an order relation such that oneshot ≤m session ≤m blanket

(2.3)
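As an illustration of how these basic definitions can be phrased in the Calculus of Inductive Constructions, the following Coq sketch introduces the types presented so far. It is only a sketch: Permission, Domain and SuiteID are assumed as abstract parameters, and all the remaining names (mkDescriptor, mkSuite, mode_le, and so on) are illustrative and need not coincide with those of the actual development available at the repository mentioned in Section 2.1.3.

(* Abstract sets of the model, assumed as parameters. *)
Parameters Permission Domain SuiteID : Set.

(* A suite descriptor: required and optional permissions (2.1). *)
Record Descriptor : Type := mkDescriptor {
  required : Permission -> Prop;
  optional : Permission -> Prop
}.

(* An installed suite (2.2). *)
Record Suite : Type := mkSuite {
  id         : SuiteID;
  domain     : Domain;
  descriptor : Descriptor
}.

(* User interaction modes and the order oneshot <= session <= blanket (2.3). *)
Inductive Mode : Set := oneshot | session | blanket.

Definition mode_le (m1 m2 : Mode) : Prop :=
  match m1, m2 with
  | oneshot, _       => True
  | session, oneshot => False
  | session, _       => True
  | blanket, blanket => True
  | blanket, _       => False
  end.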


We will assume for the rest of the formalization that the security policy of the protection domains on the device is an anonymous constant of type P olicy def = [[ allow : Domain → P ermission → P rop, user : Domain → P ermission → M ode → P rop ]]

(2.4)

which for each domain specifies at most one mode for a given permission, (∀ d p, allow d p → ∀ m, ¬user d p m) ∧ (∀ d p m, user d p m → ¬allow d p ∧ ∀ m′ , user d p m′ → m = m′ )

(2.5)

and such that allow d p holds when domain d unconditionally grants the permission p and user d p m holds when domain d grants permission p with explicit user authorization and maximum allowable mode m (w.r.t. ≤m ). The permissions effectively granted to a MIDlet suite are the intersection of the permissions requested in its descriptor with the union of the permissions given unconditionally by its domain and those given explicitly by the user.

Device state To reason about the MIDP 2.0 security model most details of the device state may be abstracted; it is sufficient to specify the set of installed suites, the permissions granted or revoked to them and the currently active suite in case there is one. The active suite, and the permissions granted or revoked to it for the session are grouped into a record structure

SessionInfo def= [[ id : SuiteID,
                    granted : Permission → Prop,
                    revoked : Permission → Prop ]]        (2.6)

The abstract device state is described as a record of type

State def= [[ suite : Suite → Prop,
              session : option SessionInfo,
              granted : SuiteID → Permission → Prop,
              revoked : SuiteID → Permission → Prop ]]    (2.7)

where suite is the characteristic predicate of the set of installed suites. Example 2.2.1. Consider a MIDlet that periodically connects to a webmail service using HTTPS when possible or HTTP otherwise, and alerts the user whenever they have new mail. The suite containing this MIDlet should declare in its descriptor as required permissions p push, for accessing the PushRegistry (for timer-based activation), and p http, for using the HTTP protocol API. It should also declare as optional the permission p https for accessing the HTTPS protocol API. Suppose that upon installation, the suite (whose identifier is id) is recognized as trusted and is thus bound to a protection domain dom that allows access to the PushRegistry API unconditionally but by default requests user authorization for opening every HTTP or HTTPS connection. Suppose also that the domain allows the user to grant the MIDlet the permission for opening further connections as long as the suite remains installed. Then, the security policy satisfies: allow dom p push ∧ user dom p http blanket ∧ (2.8) user dom p https blanket


If st is the state of the device, the suite is represented by some ms : Suite such that st.suite ms and ms.id = id hold. Its descriptor ms.descriptor satisfies ms.descriptor.required p push ∧ ms.descriptor.required p http ∧ ms.descriptor.optional p https

(2.9)
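Continuing the illustrative Coq sketch given earlier, the suite of this example could be written as a concrete term; the permission and domain constants and the suite identifier below are assumed parameters introduced only for the example.

(* Assumed constants for the example. *)
Parameters p_push p_http p_https : Permission.
Parameter dom : Domain.
Parameter webmail_id : SuiteID.

(* A descriptor satisfying (2.9): p_push and p_http required, p_https optional. *)
Definition webmail_descriptor : Descriptor :=
  mkDescriptor
    (fun p : Permission => p = p_push \/ p = p_http)
    (fun p : Permission => p = p_https).

(* The installed suite of Example 2.2.1. *)
Definition webmail_suite : Suite :=
  mkSuite webmail_id dom webmail_descriptor.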

The MIDlet will have unlimited access to the PushRegistry applet, but will have to request user authorization every time it makes a new connection. The user may choose at any time to authorize further connections by granting the corresponding permission in blanket mode, thus avoiding being asked for authorization each time the applet communicates with the webmail service. The remainder of this subsection enumerates the conditions that must hold for an element s : State in order to represent a valid state for a device. 1. A MIDlet suite can be installed and bound to a protection domain only if the set of permissions declared as required in its descriptor are a subset of the permissions the domain offers (with or without user authorization). This compatibility relation between des : Descriptor and dom : Domain can be stated formally as follows, des ≀ dom def = ∀ p : P ermission, des.required p → allow dom p ∨ ∃ m : M ode, user dom p m

(2.10)

Every installed suite must be compatible with its associated protection domain, SuiteCompatible def = ∀ ms : Suite, s.suite ms → ms.descriptor ≀ ms.domain

(2.11)

2. Whenever there exists a running session, the suite identifier in s.session must correspond to an installed suite, CurrentInstalled def = ∀ ses : SessionInf o, s.session = Some ses → ∃ ms : Suite, s.suite ms ∧ ms.id = ses.id

(2.12)

3. The set of permissions granted for the session must be a subset of the permissions requested in the application descriptor of the active suite. In addition, the associated protection domain policy must allow those permissions to be granted at least in session mode, V alidSessionGranted def = ∀ ses : SessionInf o, s.session = Some ses → ∀ p : P ermission, ses.granted p → ∀ ms : Suite, s.suite ms → ms.id = ses.id → (ms.descriptor.required p ∨ ms.descriptor.optional p) ∧ ∃ m : M ode, user ms.domain p m ∧ session ≤m m

(2.13)


Table 2.1: Security-related events

Name       Description                 Type
start      Start of session            SuiteID → Event
terminate  End of session              Event
request    Permission request          Permission → option UserAnswer → Event
install    MIDlet suite installation   SuiteID → Descriptor → Domain → Event
remove     MIDlet suite removal        SuiteID → Event

4. Every installed suite shall have a unique identifier, U niqueSuiteID def = ∀ ms1 ms2 : Suite, s.suite ms1 → s.suite ms2 → ms1 .id = ms2 .id → ms1 = ms2

(2.14)

5. For every installed suite with identifier id, the predicate s.granted id should be valid with respect to its descriptor and associated protection domain, V alidGranted def = ∀ (p : P ermission)(id : SuiteID), s.granted id p → ∀ ms : Suite, s.suite ms → ms.sid = id → (ms.descriptor.required p ∨ ms.descriptor.optional p) ∧ user ms.domain p blanket

(2.15)

6. A granted permission shall not be revoked at the same time and viceversa. This condition is formalized by the predicate V alidGrantedRevoked. Events We define a set Event for those events that are relevant to our abstraction of the device state (Table 2.1). The user may be presented with the choice between accepting or refusing an authorization request, specifying the period of time their choice remains valid. The outcome of a user interaction is represented using the type U serAnswer with constructors ua allow, ua deny : M ode → U serAnswer (2.16) The behaviour of the events is specified by their pre- and postconditions given by the predicates P re and P ost respectively. Preconditions (Table 2.2) are defined in terms of the device state while postconditions (Table 2.3) are defined in terms of the before and after states and an optional response which is only meaningful for the request event and indicates whether the requested operation is authorized, P re : State → Event → P rop P ost : State → State → option Response → Event → P rop

(2.17)
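In the same illustrative Coq style, and reusing the types of the earlier sketch, the user answers (2.16), the responses, and the security-related events of Table 2.1 could be introduced as inductive types (constructor names are, again, only indicative):

Inductive UserAnswer : Set :=
  | ua_allow : Mode -> UserAnswer
  | ua_deny  : Mode -> UserAnswer.

(* Possible responses to a permission request. *)
Inductive Response : Set := allowed | denied.

(* Security-related events (Table 2.1). *)
Inductive Event : Type :=
  | start     : SuiteID -> Event
  | terminate : Event
  | request   : Permission -> option UserAnswer -> Event
  | install   : SuiteID -> Descriptor -> Domain -> Event
  | remove    : SuiteID -> Event.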

Example 2.2.2. Consider an event representing a permission request for which the user denies the authorization. Such an event can only occur when the active suite has


Table 2.2: Event preconditions

Pre s (start id) def= s.session = None ∧ ∃ ms : Suite, s.suite ms ∧ ms.id = id

Pre s terminate def= s.session ≠ None

Pre s (install id des dom) def= des ≀ dom ∧ ∀ ms : Suite, s.suite ms → ms.id ≠ id

Pre s (remove id) def= (∀ ses : SessionInfo, s.session = Some ses → ses.id ≠ id) ∧ ∃ ms : Suite, s.suite ms ∧ ms.id = id

Table 2.3: Event postconditions

Post s s′ r (start id) def= r = None ∧ s ≡session s′ ∧ ∃ ses′, s′.session = Some ses′ ∧ ses′.id = id ∧ ∀ p : Permission, ¬ses′.granted p ∧ ¬ses′.revoked p

Post s s′ r terminate def= r = None ∧ s ≡session s′ ∧ s′.session = None

Post s s′ r (install id des dom) def= r = None ∧ (∀ ms : Suite, s.suite ms → s′.suite ms) ∧ (∀ ms : Suite, s′.suite ms → s.suite ms ∨ ms = ⟨id, dom, des⟩) ∧ s′.suite ⟨id, dom, des⟩ ∧ s′.session = s.session ∧ (∀ p : Permission, ¬s′.granted id p ∧ ¬s′.revoked id p) ∧ (∀ id1 : SuiteID, id1 ≠ id → s′.granted id1 = s.granted id1 ∧ s′.revoked id1 = s.revoked id1)

Post s s′ r (remove id) def= r = None ∧ s ≡suite s′ ∧ (∀ ms : Suite, s.suite ms → ms.id ≠ id → s′.suite ms) ∧ (∀ ms : Suite, s′.suite ms → s.suite ms ∧ ms.id ≠ id)


P re s (request p N one) := ∃ ses, s.session = Some ses ∧ ∀ ms, s.suite ms → ms.sid = ses.sid → (ms.descriptor.required p ∨ ms.descriptor.optional p) → ∀ m, user ms.domain p m → ses.granted p ∨ ses.revoked p ∨ s.granted ses.sid p ∨ s.revoked ses.sid p P ost s s′ r (request p N one) := s = s′ ∧ ∀ ses ms, s.session = Some ses → s.suite ms → ms.sid = ses.sid → (¬ms.descriptor.required p → ¬ms.descriptor.optional p → r = Some denied) ∧ (ms.descriptor.required p ∨ ms.descriptor.optional p → (allow ms.domain p → r = Some allowed) ∧ (ses.granted p → r = Some allowed) ∧ (s.granted ses.sid p → r = Some allowed) ∧ (ses.revoked p → r = Some denied) ∧ (s.revoked ses.sid p → r = Some denied)) Figure 2.2: Specification of event request when user intervention is not required declared the requested permission in its descriptor and is bound to a protection domain that specifies a user interaction mode for that permission (otherwise, the request would be immediately accepted or rejected). Furthermore, the requested permission must not have been revoked or granted for the rest of the session or the rest of the suite’s life, P re s (request p (Some (ua deny m))) def = ∃ ses : SessionInf o, s.session = Some ses ∧ ∀ ms : Suite, s.suite ms → ms.id = ses.id → (ms.descriptor.required p ∨ ms.descriptor.optional p) ∧ (∃ m1 : M ode, user ms.domain p m1 ) ∧ ¬ses.granted p ∧ ¬ses.revoked p ∧ ¬s.granted ses.id p ∧ ¬s.revoked ses.id p

(2.18)

When m = session, the user revokes the permission for the whole session, therefore, the response denies the permission and the state is updated accordingly, P ost s s′ r (request p (Some (ua deny session))) def = r = Some denied ∧ s ≡session s′ ∧ ∀ ses : SessionInf o, s.session = Some ses → ∃ ses′ : SessionInf o, s′ .session = Some ses′ ∧ ses′ ≡revoked ses ∧ ses′ .revoked p ∧ (∀ q : P ermission, q 6= p → ses′ .revoked q = ses.revoked q)

(2.19)

The full specification of event request is divided into three cases: i) when user intervention is not required (Figure 2.2). The permission must have been previously granted or revoked by the rest of the session or indefinitely; ii) when intervention is required and the user accepts the request (Figure 2.3). In this case, the state changes if the user gives permission to the suite by session or indefinitely; and iii) when intervention


P re s (request p (Some (ua allow m))) := ∃ ses, s.session = Some ses ∧ ∀ ms, s.suite ms → ms.sid = ses.sid → (ms.descriptor.required p ∨ ms.descriptor.optional p) ∧ ¬ses.granted p ∧ ¬ses.revoked p ∧ ¬s.granted ses.sid p ∧ ¬s.revoked ses.sid p ∧ (∃ m′ , user ms.domain p m′ ) ∧ (∀ m′ , user ms.domain p m′ → m ≤m m′ ) P ost s s′ r (request (Some (ua allow oneshot))) := r = Some allowed ∧ s = s′ P ost s s′ r (request (Some (ua allow session))) := r = Some allowed ∧ s ≡session s′ ∧ ∀ ses, s.session = Some ses → ∃ ses′ , s′ .session = Some ses′ ∧ ses′ ≡granted ses ∧ ses′ .granted p ∧ (∀ q, q 6= p → ses′ .granted q = ses.granted q) P ost s s′ r (request (Some (ua allow blanket))) := r = Some allowed ∧ s ≡granted s′ ∧ ∀ ses, s.session = Some ses → s′ .granted ses.sid p ∧ (∀ q, q 6= p → s′ .granted ses.sid q = s.granted ses.sid q) ∧ (∀ id, id 6= ses.sid → s′ .granted id = s.granted id) Figure 2.3: Specification of event request when intervention is required and the user accepts the request

P re s (request p (Some (ua deny m))) := ∃ ses, s.session = Some ses ∧ ∀ ms, s.suite ms → ms.sid = ses.sid → (ms.descriptor.required p ∨ ms.descriptor.optional p) ∧ ¬ses.granted p ∧ ¬ses.revoked p ∧ ¬s.granted ses.sid p ∧ ¬s.revoked ses.sid p ∧ (∃ m′ , user ms.domain p m′ ) P ost s s′ r (request (Some (ua deny oneshot))) := r = Some denied ∧ s = s′ P ost s s′ r (request (Some (ua deny session))) := r = Some denied ∧ s ≡session s′ ∧ ∀ ses, s.session = Some ses → ∃ ses′ , s′ .session = Some ses′ ∧ ses′ ≡revoked ses ∧ ses′ .revoked p ∧ (∀ q, q 6= p → ses′ .revoked q = ses.revoked q) P ost s s′ r (request (Some (ua deny blanket))) := r = Some denied ∧ s ≡revoked s′ ∧ ∀ ses, s.session = Some ses → s′ .revoked ses.sid p ∧ (∀ q, q 6= p → s′ .revoked ses.sid q = s.revoked ses.sid q) ∧ (∀ id, id 6= ses.sid → s′ .revoked id = s.revoked id) Figure 2.4: Specification of event request when intervention is required and the user denies the request


is required and the user denies the request (Figure 2.4). The state changes if the user denies permission to the suite by session or indefinitely.

s0 ↪–start id/r1–→ s1 ↪–e2/r2–→ s2 ↪–e3/r3–→ · · · ↪–en−1/rn−1–→ sn−1 ↪–terminate/rn–→ sn

Figure 2.5: A session for a suite with identifier id

One-step execution The behavioural specification of the execution of an event is given by the ↪→ relation with the following introduction rules:

 ¬Pre s e
------------------ (npre)
 s ↪–e/None–→ s

 Pre s e        Post s s′ r e
------------------------------ (pre)
 s ↪–e/r–→ s′
                                                        (2.20)

Whenever an event occurs for which the precondition does not hold, the state must remain unchanged. Otherwise, the state may change in such a way that the event postcondition is established. The notation s ↪–e/r–→ s′ may be read as “the execution of the event e in state s results in a new state s′ and produces a response r”.

Sessions A session is the period of time spanning from a successful start event to a terminate event, in which a single suite remains active. A session for a suite with identifier id (Figure 2.5) is determined by an initial state s0 and a sequence of steps ⟨ei, si, ri⟩ (i = 1, . . . , n) such that the following conditions hold,
• e1 = start id ;
• Pre s0 e1 ;
• ∀ i ∈ {2, . . . , n − 1}, ei ≠ terminate ;
• en = terminate ;
• ∀ i ∈ {1, . . . , n}, si−1 ↪–ei/ri–→ si .
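The one-step relation defined by the rules in (2.20) is naturally captured in Coq by an inductive definition. The following self-contained sketch assumes the state, event and response types, as well as the Pre and Post predicates, as abstract parameters; exec plays the role of ↪→.

Parameters State Event Response : Type.
Parameter Pre  : State -> Event -> Prop.
Parameter Post : State -> State -> option Response -> Event -> Prop.

(* exec s e r s' : executing e in s yields response r and new state s'. *)
Inductive exec : State -> Event -> option Response -> State -> Prop :=
  | exec_npre : forall (s : State) (e : Event),
      ~ Pre s e -> exec s e None s
  | exec_pre : forall (s s' : State) (e : Event) (r : option Response),
      Pre s e -> Post s s' r e -> exec s e r s'.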

A partial session is a session for which the terminate event has not yet been elicited; it is defined inductively by the following rules,

 Pre s0 (start id)        s0 ↪–start id/r1–→ s1
-------------------------------------------------- (psession start)
 PSession s0 ([ ] ⊳ ⟨start id, s1, r1⟩)
                                                        (2.21)

 PSession s0 (ss ⊳ last)      e ≠ terminate      last.s ↪–e/r–→ s′
--------------------------------------------------------------------- (psession app)
 PSession s0 (ss ⊳ last ⊳ ⟨e, s′, r⟩)
                                                        (2.22)

Now, sessions can be easily defined as follows,

 PSession s0 (ss ⊳ last)      last.s ↪–terminate/r–→ s′
---------------------------------------------------------- (session terminate)
 Session s0 (ss ⊳ last ⊳ ⟨terminate, s′, r⟩)
                                                        (2.23)

2.2.2 Verification of security properties

This section is devoted to establishing relevant security properties of the model. Proofs are merely outlined; however, all proofs have been formalized in Coq and are available as part of the full specification. An invariant of one-step execution We call one-step invariant a property that remains true after the execution of every event if it is true before. We show next that the validity of the device state, as defined in Section 2.2.1, is a one-step invariant of our specification. Theorem 2.2.3. Let V alid be a predicate over State defined as the conjunction of the validity conditions in Section 2.2.1. For any s s′ : State, r : option Response and e/r ′ ′ e : Event, if V alid s and s ֒− −→ s hold, then V alid s also holds. e/r ′ ′ ′ Proof. By case analysis on s ֒− −→ s . When P re s e does not hold, s = s and s is valid because s is valid. Otherwise, P ost s s′ r e must hold and we proceed by case analysis on e. We will only show the case request p (Some (ua deny session)), obtained after further case analysis on a when e = request p (Some a). The postcondition (2.19) entails that s ≡session s′ , that the session remains active, and that ses′ ≡revoked ses. Therefore, the set of installed suites remains unchanged (s′ .suite = s.suite), the set of permissions granted for the session does not change (ses′ .granted = ses.granted) and neither does the set of permissions granted or revoked in blanket mode (s′ .granted = s.granted, s′ .revoked = s.revoked). From these equalities, every validity condition of the state s′ except V alidGrantedRevoked s′ follows immediately from the validity of s. We next prove V alidGrantedRevoked s′ . We know from the postcondition of the event that

∀ q, q 6= p → ses′ .revoked q = ses.revoked q

(2.24)

Let q be any permission. If q ≠ p, then from (2.24) follows ses′.revoked q = ses.revoked q and, because q was not granted and revoked simultaneously before the event, neither is it afterwards. If q = p, then we know from the precondition (2.18) that p was not granted before and thus it is not granted afterwards. This proves ValidGrantedRevoked s′ and, together with the previous results, Valid s′.

Session invariants We call session invariant a step property that holds for the rest of a session once it is established in a step. Let P be a predicate over T ; we define all P as an inductive predicate over snocList T by the following rules:

-------------- (all nil)
 all P [ ]

 all P ss      P s
-------------------- (all snoc)
 all P (ss ⊳ s)
                                                        (2.25)
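A possible Coq rendering of snoc lists and of the all predicate is sketched below; the definition is self-contained and the names are illustrative.

Inductive snocList (T : Type) : Type :=
  | snil : snocList T
  | snoc : snocList T -> T -> snocList T.

Arguments snil {T}.
Arguments snoc {T} _ _.

(* all P ss holds when every element of ss satisfies P (2.25). *)
Inductive all {T : Type} (P : T -> Prop) : snocList T -> Prop :=
  | all_nil  : all P snil
  | all_snoc : forall (ss : snocList T) (x : T),
      all P ss -> P x -> all P (snoc ss x).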

Theorem 2.2.4. Let s0 be a valid state and ss a partial session starting from s0 , then every state in ss is valid, all (λ step . V alid step.s) ss

(2.26)


Proof. By induction on the structure of P Session s0 ss. • When constructed using psession start, ss has the form [ ] ⊳ hstart id, s1 , r1 i and start id/r1 s0 ֒− −−−−−−→ s1 holds. We must prove all (λ step . V alid step.s) ([ ] ⊳ hstart id, s1 , r1 i)

(2.27)

By applying all app and then all nil the goal is simplified to V alid s1 and is proved start id/r1 from s0 ֒− −−−−−−→ s1 and V alid s0 by applying Theorem 2.2.3. • When it is constructed using psession app, ss has the form ss1 ⊳ last ⊳ he, s′ , ri e/r ′ and last.s ֒− −→ s holds. The induction hypothesis is all (λ step . V alid step.s) (ss1 ⊳ last)

(2.28)

and we must prove all (λ step . V alid step.s) (ss1 ⊳ last ⊳ he, s′ , ri). By applying all app and then (2.28) the goal is simplified to V alid s′ . From (2.28) we know e/r ′ that last.s is a valid state. The goal is proved from last.s ֒− −→ s and V alid last.s by applying Theorem 2.2.3.

The above theorem may be easily extended from partial sessions to sessions using Theorem 2.2.3 one more time. State validity is just a particular property that is true for a partial session once it is established, the result can be generalized for other properties as shown in the following lemma. Lemma 2.2.5. For any property P of a step satisfying ∀ (s s′ : State)(r r′ : option Response)(e e′ : Event), e′ /r′ ′ ′ ′ ′ e′ 6= terminate → s ֒− −−→ s → P he, s, ri → P he , s , r i ,

(2.29)

if P Session s0 (ss ⊳ step ⊕ ss1 ) and P step, then all P ss1 holds. Perhaps a more interesting property is a guarantee of the proper enforcement of revocation. We prove that once a permission is revoked by the user for the rest of a session, any further request for the same permission in the same session is refused. Lemma 2.2.6. The following property satisfies (2.29), (λ step . ∃ ses, step.s.session = Some ses ∧ ses.revoked p)

(2.30)

Theorem 2.2.7. For any permission p, if P Session s0 (ss⊳step⊳step1 ⊕ss1 ), step1 .e = request p (Some (ua deny session)) and P re step.s step1 .e, then all (λ step . ∀ o, step.e = request p o → step.r 6= Some allowed) ss1

(2.31)

Proof. Since Post step.s step1.s step1.r step1.e must hold, p is revoked for the session in step1.s. From Lemmas 2.2.5 and 2.2.6, p remains revoked for the rest of the session. Let e = request p o be an event in a step step2 in ss1. We know that p is revoked for the session in the state before step2.s. If the precondition for e does not hold in the state before [1], then step2.r = None. Otherwise, e must be request p None and its postcondition entails step2.r = Some denied.

[1] Actually, it holds only when o = None.

2.2.3 Refinement

In the formalization described in the previous sections it has been specified the behaviour of events implicitly as a binary relation on states instead of explicitly as a state transformer. Moreover, the described formalization is higher-order because, for instance, predicates are used to represent part of the device state and the transition semantics of events is given as a relation on states. The most evident consequence of this choice is that the resulting specification is not executable. What is more, the program extraction mechanism provided by Coq to extract programs from specifications cannot be used in this case. However, had we constructed a more concrete specification at first, we would have had to take arbitrary design decisions from the beginning, unnecessarily restricting the allowable implementations and complicating the verification of properties of the security model. We will show in the rest of this section that it is feasible to obtain an executable specification from our abstract specification. The methodology we propose produces also a proof that the former is a refinement of the latter, thus guaranteeing soundness of the entire process. The methodology is inspired by the work of Spivey [69] on operation and data refinement, and the more comprehensive works of Back and von Wright [70] and Morgan [71] on refinement calculus. Executable specification In order to construct an executable specification it is first necessary to choose a concrete representation for every object in the original specification not directly implementable in a functional language. In particular, the transition relation that defines the behaviour of events implicitly by means of their pre- and postconditions must be refined to a function that deterministically computes the outcome of an event. At this point, it is unavoidable to take some arbitrary decisions about the exact representation to use. For example, a decidable predicate P on A might be represented as a function from A to a type isomorphic to bool, as an exhaustive list of the elements of A that satisfy the predicate (when P has finite support), or in some other equally expressive way. For every type T in the abstract specification, we will denote its concrete model as T . Let a : A and a : A; we will indicate that a is a refinement of a as a ⊑ a. In our case, every predicate to be refined is decidable and is satisfied only by a finite subset of elements in its domain (they are all characteristic predicates of finite sets). Let P be one of such predicates on a set A and let l be a list of elements of A; we will say that l refines P whenever (∀ a, P a → ∃ a, a ∈ l ∧ a ⊑ a) ∧ (∀ a, a ∈ l → ∃ a, P a ∧ a ⊑ a)

(2.32)

where x ∈ l means that there exists at least one occurrence of x in l. When A and Ā coincide, ⊑ is the equality relation on A, and the condition (2.32) simplifies to ∀ a, P a ↔ a ∈ l. Let a : A and ā : Ā be such that ā ⊑ a; we define

(None : option Ā) ⊑ (None : option A)        Some ā ⊑ Some a        (2.33)
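As an illustration only, and restricted to the simple case in which abstract and concrete elements coincide, the refinement of a predicate by a list and the refinement of optional values might be written in Coq as follows:

Require Import List.

(* l refines P when l enumerates exactly the elements satisfying P;
   this is the simplified form of (2.32) for coinciding element types. *)
Definition refines_pred {A : Type} (l : list A) (P : A -> Prop) : Prop :=
  forall a : A, P a <-> In a l.

(* Refinement of optional values (2.33), again for coinciding element types. *)
Definition refines_opt {A : Type} (o_bar o : option A) : Prop :=
  match o_bar, o with
  | None, None => True
  | Some a_bar, Some a => a_bar = a
  | _, _ => False
  end.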

The above concrete representations can be used to obtain a concrete model for the


device state and the security-related events:

State def= [[ suite : list Suite,
              session : option SessionInfo,
              granted : SuiteID → list Permission,
              revoked : SuiteID → list Permission ]]        (2.34)

start : SuiteID → Event
terminate : Event
request : Permission → option UserAnswer → Event
install : SuiteID → Descriptor → Domain → Event
remove : SuiteID → Event        (2.35)

Figure 2.6: Simulation relation between interp and the relation ↪→. Given states s, s̄ and events e, ē such that s̄ ⊑ s and ē ⊑ e, for every state s̄′ and response r computed by interp there must exist a corresponding abstract state s′ refined by s̄′, reachable from s by the ↪→ relation with the same response

The refinement relation ⊑ can be naturally extended to states, events and the rest of the types used in the formalization. Soundness Having chosen a concrete representation for the objects in the specification, everything is set for specifying the behaviour of events as a function interp : State → Event → State × (option Response)

(2.36)

The soundness of the interp function w.r.t. the transition relation ↪→ is given by the following simulation condition, illustrated in Figure 2.6.

∀ (s : State) (s̄ : State) (e : Event) (ē : Event) (r : option Response),
   s̄ ⊑ s → ē ⊑ e →
   let (s̄′, r) := interp s̄ ē in ∃ s′ : State, s̄′ ⊑ s′ ∧ s ↪–e/r–→ s′        (2.37)

It can be shown that the refinement relation ⊑ on states satisfies

∀ s : State, ∃ s̄, s̄ ⊑ s        (2.38)


Thus, the existential quantifier in (2.37) may be replaced by a universal quantifier to obtain the stronger (but sometimes easier to prove) condition:

∀ (s s′ : State) (s̄ : State) (e : Event) (ē : Event) (r : option Response),
   s̄ ⊑ s → ē ⊑ e →
   let (s̄′, r) := interp s̄ ē in s̄′ ⊑ s′ → s ↪–e/r–→ s′        (2.39)

With a function interp satisfying either (2.37) or (2.39) and a concrete initial state s̄0 that refines an initial abstract state s0, the Coq program extraction mechanism can be used to produce an executable prototype of the MIDP 2.0 security model in a functional language such as OCaml, Haskell or Scheme.
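The ingredients of this soundness argument can be sketched in Coq as shown below. Everything is assumed abstractly: the names StateC, EventC, refines_state, refines_event and interp are illustrative and are not those of the actual development; the extraction commands are shown only as a comment.

Parameters State Event Response StateC EventC : Type.
Parameter exec : State -> Event -> option Response -> State -> Prop.
Parameter refines_state : StateC -> State -> Prop.
Parameter refines_event : EventC -> Event -> Prop.
Parameter interp : StateC -> EventC -> StateC * option Response.

(* Shape of the simulation condition (2.37). *)
Definition interp_sound : Prop :=
  forall (s : State) (sc : StateC) (e : Event) (ec : EventC),
    refines_state sc s -> refines_event ec e ->
    let (sc', r) := interp sc ec in
    exists s' : State, refines_state sc' s' /\ exec s e r s'.

(* Once interp is defined as a concrete function, an executable prototype
   can be obtained, for instance:
     Require Extraction.
     Extraction Language OCaml.
     Recursive Extraction interp.                                        *)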

2.3 A certified access controller for MIDP 2.0

This section reports work, based on [66], concerning the development of a high level formal specification of an Access Control module (AC) of JME - MIDP 2.0. In particular, a certified algorithm that satisfies the proposed specification of an AC is described.

2.3.1 Formalization of the access control module

The specification of the module has been defined as a conservative extension of the model presented in Section 2.2.1. The set methodID of (method) identifiers that compose a MIDlet suite and the set functionID of functions (or APIs) in a mobile device are introduced. The definition of a Suite is extended with a predicate that characterizes the methods that belong to that suite:

Suite def= [[ id : SuiteID,
              domain : Domain,
              descriptor : Descriptor,
              methodid : methodID → Prop ]]        (2.40)

Then, the state of the device is extended with the following predicates: one that specifies the functions or APIs registered in the device (function), one that describes the sensitive functions registered in the device (functionSensitive), and one that models the association of a permission to a sensitive function (funcperm). Formally,

State def= [[ suite : Suite → Prop,
              session : option SessionInfo,
              granted : SuiteID → Permission → Prop,
              revoked : SuiteID → Permission → Prop,
              function : functionID → Prop,
              functionSensitive : functionID → Prop,
              funcperm : functionID → Permission → Prop ]]        (2.41)

A state now is said to be a valid one if it satisfies the conditions described in Section 2.2.1 and the following ones (for details see [66]):
• MIDletsAtLeastOneMethod: every installed MIDlet has at least one method;
• fAtLeastOnePerm: every installed sensitive function necessarily has an associated permission;


• fPermInstalledSens: if a function has an associated permission then that function must be installed in the device and is sensitive;
• fUniquePermission: no function has more than one associated permission;
• methodInOnlyOneMidlet: every method identifier is unique, even across distinct MIDlets;
• permStateCoherence: if a permission was granted (revoked) to a MIDlet for the rest of its life cycle, then it can neither have been revoked (granted) for the rest of its life cycle, nor have been granted or revoked for the rest of the session. Likewise, if a permission was granted (revoked) for the rest of the session, then it can neither have been revoked (granted) for the rest of the session, nor have been granted or revoked for the rest of its life cycle; and,
• policyCompatible: if the security policy does not mention any association between the protection domain of a suite and a certain permission, then that permission cannot be registered as granted or revoked to that suite in the device.

As to the events, the type Event is now extended with a new constructor, call,

call : methodID → functionID → Event

(2.42)

which represents a method call to a function of the device. The behaviour of this kind of event is formalized using the pre- and postconditions explained in what follows. Let suite be a middlet suite, meth be a method, func be a function invoked by that method, and dom be the protection domain associated to suite. Then, the precondition of call P re s (call meth f unc) def = P reCall meth ∧ (s.f unctionSensitive f unc → ∃ perm : P ermission, s.f uncperm f unc perm ∧ P reRequestN one perm ∧ ∀ (ua : U serAnswer) (m : M ode), (ua = uaAllow m → P reRequestU serAllow perm m) ∧ (ua = uaDeny m → P reRequestU serDeny perm))

(2.43)

requires in the first place (PreCall) that there exists an active session, that suite is the active suite, and that meth belongs to it. Then, if func is a sensitive function, there must exist a permission perm associated to the function such that perm has been declared as required or optional inside the descriptor of suite. In the case no user intervention is required, the precondition specifies (PreRequestNone) that the security policy either unconditionally allows dom to access func, or dom specifies a user interaction mode for perm and the user has granted a permission which is still valid. If user intervention is required and the user denies access, the precondition establishes (PreRequestUserDeny) that dom specifies a user interaction mode for perm, and the user has not granted any permission that is still valid. If user intervention is required and the user allows the access, the precondition additionally requires (PreRequestUserAllow) the user authorization mode to be less, according to the order defined on permission modes, than the maximum allowed mode specified in the security policy.

The postcondition of the call event has been formally defined in Coq as follows

Pos s s′ r (call meth func) def=
  (s.functionSensitive func →
     ∀ perm : Permission, s.funcperm func perm →
       PosRequestNone perm ∨
       ∃ (ua : UserAnswer) (m : Mode),
         (ua = uaAllow m ∧ PosRequestUserAllow perm m) ∨
         (ua = uaDeny m ∧ PosRequestUserDeny perm m)) ∧
  (¬s.functionSensitive func → s = s′ ∧ r = Some allowed)

(2.44)

In the case no user intervention is required, the postcondition (PosRequestNone) specifies the response of the access controller when a call event is executed, namely, if the user has previously allowed (denied) authorization and it is still valid then the module will respond allowing (denying) the access to the function. If the security policy specifies that the access should be allowed unconditionally, then the module will respond allowing permission. In the case that user intervention is required and the user grants the permission, the specification establishes (PosRequestUserAllow) that the response of the controller should be to allow the access. Similarly, if the permission is granted at session level (session mode) or for the whole life cycle of the application (blanket mode), this authorization should be registered in the device. If it is granted in oneshot mode, then the entire state of the device remains unchanged and the controller just reacts by granting the access. PosRequestUserDeny is similar to PosRequestUserAllow but the controller shall deny the access and the update will be on the fields that register the access denial (depending on whether the denial is for the rest of the session or for the rest of the life cycle of the suite).

2.3.2 Verification of security properties

The invariants satisfied by the core model, which are described in Section 2.2.2, are preserved in the extended model, including those regarding the validity of the state of the device. The proof, for instance, that the execution of any event preserves the compatibility property of installed MIDlet suites described in (2.11), follows by first proving that the call event does not modify the state of the device and then proceeding as shown in Section 2.2.2. In what follows only two of the properties, permStateCoherence and policyCompatible, described in Section 2.3.1 are formally stated (Lemmas 2.3.1 and 2.3.2, respectively) and hints are provided on how it was proved they are satisfied by (in particular) a call event execution. For the complete set of analyzed properties, including their proofs, the interested reader is referred to [66, 72]. Lemma 2.3.1. e/r ′ ∀(s s′ : State)(e : Event)(r : option Response) V alid s → s ֒− −→ s → ∀(sid : SuiteID)(p : P ermission)(ses : SessionInf o), s′ .session = Some ses → ses.sid = sid → (s′ .granted sid p → ¬s′ .revoked sid p ∧ ¬ses.granted p ∧ ¬ses.revoked p) ∧ (s′ .revoked sid p → ¬s′ .granted sid p ∧ ¬ses.granted p ∧ ¬ses.revoked p) ∧ (ses.granted p → ¬s′ .granted sid p ∧ ¬s′ .revoked sid p ∧ ¬ses.revoked p) ∧ (ses.revoked p → ¬s′ .granted sid p ∧ ¬s′ .revoked sid p ∧ ¬ses.granted p) (2.45)

2.3 A certified access controller for MIDP 2.0 Proof. Each of the four subproperties (propositions in the conjunction formula) is proved following a similar strategy, after the execution of an event (particularly a call). In the case that no user intervention is required, either because is established by the security policy or because the invoked function is not sensitive, the proof follows directly because the state of the device remains unchanged. In the case that user intervention is required, the proof proceeds by performing case analysis on the response provided by the user to the permission request (allow or deny in oneshot, session and blanket modes). Lemma 2.3.2. e/r ′ ∀(s s′ : State)(e : Event)(r : option Response) V alid s → s ֒− −→ s → ∀(sid : SuiteID)(p : P ermission)(ses : SessionInf o), s′ .session = Some ses → ses.sid = sid → ∀ ms : Suite, s′ .suite ms → ms.sid = sid → (¬policy.allow ms.domain p ∧ ¬∃ m : M ode, policy.user ms.domain p m) → ¬s′ .granted s sid p ∧ ¬s′ .revoked s sid p ∧ ¬ses.granted p ∧ ¬ses.revoked p (2.46)

Proof. In the case that user intervention is not required in the execution of an event (particularly a call), the proof follows directly because the state of the device remains unchanged. Otherwise, the proof follows by contradiction, because a request for user intervention would contradict the hypothesis that there is no relation in the security policy between the permission and the protection domain of the active suite.

2.3.3 A certified access controller

The high level formalization that has been described in the previous sections is appropriate and general: it is a good setting to reason about the properties of the MIDP 2.0 security model and, in particular, about the behavior of the access control module without conditioning possible implementations. However, in order to construct and certify an algorithm that satisfies the access control behavior that has been specified, first the high level model has been refined into a more computationally oriented one and then the soundness of the refinement process has been proved. Then, the algorithm was defined using the functional language of Coq and a theorem that establishes that the obtained algorithm satisfies the concrete specification of the access controller has been formally proved with the help of the proof assistant. Model refinement Following the methodology described in Section 2.2.3, the refinement of the high level specification has been engineered by providing a representation for both the state of the device and the events that can be directly implementable using a high level functional language. In addition to that, the relation which specifies the behavior of the events has been refined down to a deterministic function that computes explicitly the state transformation specified by those events. For the refinement of a decidable predicate P over a finite set A, it has been adopted the following approach: if the intended meaning of P is to characterize a set, then it shall be represented using lists of elements of type A, as in programming languages. If the intended meaning is that of a decision procedure, it shall be represented as a boolean function over type A. To illustrate how the components


of the concrete model look like, the definition of the set State is given below; it should be grasped as the concrete representation of the state of the device.

State def= [[ suite : list Suite,
              session : option SessionInfo,
              granted : SuiteID → Permission → bool,
              revoked : SuiteID → Permission → bool,
              function : functionID → bool,
              functionSensitive : functionID → bool,
              funcperm : functionID → Permission ]]        (2.47)

The algorithm Before proceeding to describe the algorithm, it is worth recalling that each method inherits or gets the protection domain that has been associated to the MIDlet suite to which the method belongs. Therefore, every method of a middlet suite is related to the permissions that the protection domain embodies. The algorithm has been built as a quite direct implementation of the specified behavior of the access controller in the concrete model. Basically, when a method invokes a protected function or API the access controller checks that the method has the corresponding permission and that the response from the user, if required, permits the access. If this checkup is successful, the access is granted; otherwise, the action is denied and a security exception is launched. The complete definition of the algorithm in Coq can be found in the Coq repository specified in Section 2.1.3; here it is only included a pseudocode version. It can be noticed that its structure is that of a case expression where the guards of the branches express properties involving some of the following components: the state, the policy of the device, the method and the invoked function. The certification The certification of the algorithm that implements a correct access controller reduces to prove the (correctness) theorem that establishes the following property: if the precondition of the call event is satisfied by a given state, then the state resulting from executing the algorithm satisfies its corresponding postcondition. Formally this is stated as follows: Theorem 2.3.3. ∀ (s s′ : State) (policy : P olicy) (ua : U serAnswer) (m : methodID) (f : f unctionID) (r : option Response), P re s (call m f ) → accessCtrlExe s policy ua m f = (s′ , r) → P os s s′ r (call m f )

(2.48)

The proof follows directly the structure of the algorithm and is constructed by performing case analysis on the conditions of the branches, by resorting to the use of several properties of the model, in particular those related to the validity of the state of the device.
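Stated in Coq, the correctness result takes roughly the following shape. This is only a sketch of the statement: all the names below are declared as parameters for the purpose of illustration and do not necessarily match those of the development.

Parameters StateC Policy UserAnswer Response methodID functionID EventAC : Type.
Parameter call : methodID -> functionID -> EventAC.
Parameter PreAC : StateC -> EventAC -> Prop.
Parameter PosAC : StateC -> StateC -> option Response -> EventAC -> Prop.
Parameter accessCtrlExe :
  StateC -> Policy -> UserAnswer -> methodID -> functionID ->
  StateC * option Response.

(* If the precondition of call holds and the algorithm is executed, the
   resulting state and response satisfy the corresponding postcondition. *)
Definition accessCtrlExe_correct : Prop :=
  forall (s s' : StateC) (policy : Policy) (ua : UserAnswer)
         (m : methodID) (f : functionID) (r : option Response),
    PreAC s (call m f) ->
    accessCtrlExe s policy ua m f = (s', r) ->
    PosAC s s' r (call m f).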

Algorithm 1 accessCtrlExe (s : State) (policy : Policy) (ua : UserAnswer) (m : methodID) (f : functionID) : State × (option Response) :=
  (* current MIDlet suite in s *)
  let ms := getMIDletSuiteInSession s in
  (* permission associated to the function f *)
  let p := functionToPermission s f in
  (* m is not a method in ms *)
  case (methInCurrSuite s m) = false : (s, None)
  (* f is not a sensitive function registered in the device *)
  case (s.functionSensitive f = false) : (s, Some allowed)
  (* p is not a permission declared in the descriptor of ms *)
  case (ms.descriptor.required p ∨ ms.descriptor.optional p) = false : (s, Some denied)
  (* the permission p has been granted (revoked) previously in blanket mode *)
  case s.granted s.session.sid p = true : (s, Some allowed)
  case s.revoked s.session.sid p = true : (s, Some denied)
  (* the permission p has been granted (revoked) previously in session mode in the current session *)
  case s.session.granted p = true : (s, Some allowed)
  case s.session.revoked p = true : (s, Some denied)
  (* the policy grants unconditionally the permission p to MIDlet ms *)
  case policy.allow ms.domain p = true : (s, Some allowed)
  (* the protection domain specifies interaction with the user for permission p, which does not have a valid authorization (unauthorization) in s *)
  case (policy.user ms.domain p oneshot ∨ policy.user ms.domain p session ∨ policy.user ms.domain p blanket) = true :
    (* the permission p is allowed (denied) in one of three modes. In session and blanket modes, the authorization (unauthorization) is registered in the state *)
    { case ua = uaAllow oneshot : (s, Some allowed)
      case ua = uaAllow session : (s except s.session.granted p = true, Some allowed)
      case ua = uaAllow blanket : (s except s.granted (s.session.sid) p = true, Some allowed)
      case ua = uaDeny oneshot  : (s, Some denied)
      case ua = uaDeny session  : (s except s.session.revoked p = true, Some denied)
      case ua = uaDeny blanket  : (s except s.revoked (s.session.sid) p = true, Some denied) }
  (* the protection domain does not grant permission p unconditionally and does not specify any user interaction for it *)
  case (policy.user ms.domain p oneshot ∨ policy.user ms.domain p session ∨ policy.user ms.domain p blanket) = false : (s, Some denied)

2.4 Formally verifying security properties of MIDP 3.0

Two important enhancements in MIDP 3.0 (informal) specification are Inter-MIDlet Communication (IMC) and Events. In particular, the IMC protocol enables MIDP 3.0 applications to communicate and collaborate among themselves. The MIDP 3.0 Event framework is a publish and subscribe architecture that allows MIDlets to push data to all authorized subscribers. MIDP 3.0 provides, in particular, the following capabilities: i) enable and specify proper behavior for MIDlets, such as: allow multiple concurrent MIDlets, and allow inter-MIDlet communications (direct communication, and indirect using events); ii) enable shared libraries for MIDlets; and iii) increase functionality in all areas, including: secure RMS stores, removable/remote RMS stores, IPv6, and multiple network interfaces per device. In this section the formalization described in Section 2.2 is extended so as to model security at the application level, introduced in Section 2.1.1. The extended device state, a new condition for state validity, and the authorization event are presented. The validity of the security properties already proved for MIDP 2.0, and the new properties related with MIDP 3.0 are examined. The content of this section is based on [67].

2.4.1 Security at the application level

New data types are introduced: Vendor denotes the names of the MIDlet suite vendors, Certificate represents the set of public key certificates, and Signature the signature of the MIDlet suite archive. The application descriptor is extended with the vendor name and, optionally, its public key certificate and signature. A MIDlet suite declares access authorization to each set of suites involved. DomainAuthorization authorizes access for all MIDlet suites from a specific domain. SignerAuthorization authorizes access to a set of suites signed by a given signer certificate. VendorUnsignedAuthorization gives access to the set of unsigned vendor suites, while vendorSignedAuthorization indicates access authorization to the set of MIDlet suites signed by the specified signing certificate of a vendor.

Descriptor def= [[ required : Permission → Prop,
                   optional : Permission → Prop,
                   vendor : Vendor,
                   jarSignature : option Signature,
                   signerCertificate : option Certificate,
                   domainAuthorization : Domain → Prop,
                   signerAuthorization : Certificate → Prop,
                   vendorUnsignedAuthorization : Vendor → Prop,
                   vendorSignedAuthorization : Vendor → Certificate → Prop ]]        (2.49)

2.4.2 New device state

In order to represent allowed and denied access authorizations, the device state is also extended. Allowed access authorizations are represented with the authorized field while the denied ones are represented with the unauthorized one.

State def= [[ suite : SuiteID → Prop,
              session : option SessionInfo,
              granted : SuiteID → Permission → Prop,
              revoked : SuiteID → Permission → Prop,
              authorized : SuiteID → SuiteID → Prop,
              unauthorized : SuiteID → SuiteID → Prop ]]        (2.50)

A validity condition is added to the device state regarding access authorizations. ValidAuthorization establishes that a MIDlet suite cannot be simultaneously authorized and unauthorized by another suite.

ValidAuthorization def= ∀ idGrn idReq : SuiteID,
  s.authorized idGrn idReq → ¬s.unauthorized idGrn idReq        (2.51)

2.4.3 Application level access request

A new event called authorization models the access authorization request from a MIDlet suite to the shared resources of the active suite. authorization : SuiteID → Event

(2.52)

The precondition for this event establishes that in the state s an active suite must exist, and the identifier of the requesting MIDlet suite idReq belongs to an installed MIDlet suite. P re s (authorization idReq) def = ∀ ses : SessionInf o, s.session = Some ses → ∃ msReq : Suite, s.suite msReq ∧ msReq.id = idReq

(2.53)

The postcondition establishes the possible device answers and the relation between the previous and the resulting state. If the requesting MIDlet suite idReq was already authorized, the access is granted and the device state remains unchanged. If the requesting MIDlet suite idReq was not already authorized but there is a match between its credentials and any of the access authorizations declared by the active MIDlet suite, the access is also granted. This time the device state is modified since idReq becomes part of the set of authorized MIDlet suites of the active suite.

Post s s′ (Some allowed) (authorization idReq) def=
  ∀ ses : SessionInfo, s.session = Some ses →
    (s.authorized ses.id idReq ∧ s = s′) ∨
    (¬s.unauthorized ses.id idReq ∧ AccessAuthorization s ses.id idReq ∧
     s ≡authorized s′ ∧ s′.authorized ses.id idReq)        (2.54)

If the requesting MIDlet suite idReq was already unauthorized, the access is denied. If idReq was not authorized and there is a mismatch between its credentials and the access authorizations declared by the active MIDlet suite, the access is denied as well. In this case, the state is also modified since idReq is added to the set of unauthorized MIDlet suites of the active suite.


P ost s s′ (Some denied) (authorization idReq) def = ∀ ses : SessionInf o, s.session = Some ses → (s.unauthorized ses.id idReq ∧ s = s′ )∨ (¬s.authorized ses.id idReq ∧ ¬AccessAuthorization s ses.id idReq ∧ s ≡unauthorized s′ ∧ s′ .unauthorized ses.id idReq) (2.55) The predicate AccessAuthorization represents the comparison, in a given state, between the access authorizations declared by the first MIDlet suite and the credentials of the second one AccessAuthorization : State → SuiteID → SuiteID → P rop

(2.56)

An authorization is granted when at least one of the following conditions is verified: • IsDomainAuthorized: the protection domain of the applicant suite msReq satisfies the domainAuthorization predicate of the msGrn suite that shares the resource msGrn.descriptor.domainAuthorization (msReq.domain)

(2.57)

• IsV endorSignedAuthorized: the vendor and the certificate of the applicant suite msReq satisfy the vendorSignedAuthorization predicate of the msGrn suite that shares the resource msGrn.descriptor.vendorSignedAuthorization (msReq.vendor msReq.signerCertif icate)

(2.58)

• IsSignerAuthorized: the certificate of the applicant suite msReq satisfies the signerAuthorization predicate of the msGrn suite that shares the resource msGrn.descriptor.signerAuthorization (msReq.signerCertif icate)

(2.59)

• IsVendorUnsignedAuthorized: the vendor’s name of the applicant suite msReq satisfies the vendorUnsignedAuthorization predicate of the msGrn suite that shares the resource

msGrn.descriptor.vendorUnsignedAuthorization (msReq.vendor)        (2.60)
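Taken together, the four conditions make AccessAuthorization essentially a disjunction. The following Coq sketch illustrates its shape, formulated directly on the two suites instead of on suite identifiers in a state; every field accessor and auxiliary predicate is assumed as a parameter and the names are only indicative.

Parameters Suite Domain Vendor Certificate : Type.
Parameter domain_of      : Suite -> Domain.
Parameter vendor_of      : Suite -> Vendor.
Parameter signer_cert_of : Suite -> option Certificate.
Parameter domainAuth         : Suite -> Domain -> Prop.
Parameter signerAuth         : Suite -> Certificate -> Prop.
Parameter vendorUnsignedAuth : Suite -> Vendor -> Prop.
Parameter vendorSignedAuth   : Suite -> Vendor -> Certificate -> Prop.

(* msGrn is the suite sharing the resource, msReq the applicant suite. *)
Definition AccessAuthorization (msGrn msReq : Suite) : Prop :=
     domainAuth msGrn (domain_of msReq)                               (* 2.57 *)
  \/ (exists c, signer_cert_of msReq = Some c /\
        vendorSignedAuth msGrn (vendor_of msReq) c)                   (* 2.58 *)
  \/ (exists c, signer_cert_of msReq = Some c /\ signerAuth msGrn c)  (* 2.59 *)
  \/ vendorUnsignedAuth msGrn (vendor_of msReq).                      (* 2.60 *)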

2.4.4 Conservative extension of security properties

Several changes are introduced in the formalization of the security model of MIDP 2.0 to incorporate the new aspects of MIDP 3.0. In the first place, the device state is modified so as to keep a record of the authorized and unauthorized MIDlet suites. In the second place, the access authorization is introduced as a new event. Finally, a new condition for the state validity is added. In this new context, it is essential to prove that the device state, extended with the record of authorized and unauthorized MIDlet suites, is still valid after any event execution, even the authorization request. The state validity property must be considered with the new condition added. Lemma 2.4.1 shows that the resulting state of executing any event in a valid initial state is still a valid state.


Lemma 2.4.1. Let ExtValid be a predicate over State defined as the conjunction of the predicates Valid and ValidAuthorization. For every s s′ : State, r : option Response, and e : Event, if ExtValid s and s ↪–e/r–→ s′ hold, then ExtValid s′ also holds.

Proof. The proof follows by case analysis on s ↪–e/r–→ s′. When Pre s e does not hold, s = s′. From this equality and ExtValid s then ExtValid s′. Otherwise, Post s s′ r e must hold and we proceed by case analysis on e. Only the event authorization idReq is considered here, in particular when the authorization is granted. Case analysis in the postcondition (2.54) of the event is carried out. In the first case, if the suite is authorized in the state s, the postcondition establishes s = s′. Being ExtValid s then ExtValid s′ holds. In the second case, if the suite is not unauthorized but there is a match between its credentials and any of the access authorizations declared by the active MIDlet suite in the state s, the postcondition establishes s ≡authorized s′. From the previous equivalence and Valid s, follows Valid s′. ValidAuthorization s′ remains to be proven. The equivalence s ≡authorized s′ preserves the unauthorized suites in the state s′; therefore, the suite is not unauthorized in the state s′. Since the suite is authorized in s′, ValidAuthorization s′ holds.
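In Coq, the statement of Lemma 2.4.1 takes roughly the following shape; this is a self-contained sketch in which all names are assumed as parameters and exec stands for the one-step relation ↪→.

Parameters State Event Response : Type.
Parameter ExtValid : State -> Prop.
Parameter exec : State -> Event -> option Response -> State -> Prop.

(* Extended state validity is preserved by one-step execution. *)
Definition ext_valid_invariant : Prop :=
  forall (s s' : State) (e : Event) (r : option Response),
    ExtValid s -> exec s e r s' -> ExtValid s'.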

The state validity property is also invariant during a session. Lemma 2.4.2 shows that result. Lemma 2.4.2. Let ss be a session which starts in a valid state. Every state of ss is valid: all (λ step . ExtV alid step.s) ss (2.61) Device state invariants, such as state validity, are useful to analyze other relevant properties of the security model. For instance, having proved that nonauthorization is an invariant, if a MIDlet suite has been unauthorized in a session, then any further access authorization request of the same MIDlet suite will be denied. These two last properties are presented in the next two lemmas. Lemma 2.4.3. Given an unauthorized MIDlet suite identified by sidReq, and a session ss generated from a valid initial state, the following expression is a session invariant: all (λ step . ∃ ses, step.s.session = Some ses ∧ step.s.unauthorized ses.id sidReq) ss

(2.62)

Lemma 2.4.4. Given a MIDlet suite identified by sidReq, and a session (ss ⊳ stepi ⊳ stepi+1) ⊳ ss′ generated from a valid initial state, such that

stepi+1.e = authorization sidReq ∧ stepi+1.r = Some denied ∧ Pre stepi.s stepi+1                (2.63)

Then,

all (λ step . step.e = authorization sidReq → step.r = Some denied) ss′                (2.64)

The lemmas stated above were formalized and proven in the Coq proof assistant. An algorithm for the access authorization of applications was developed and its correctness proven [73]. Additionally, we have developed certified algorithms for application installation and for communication between applications (the IMC protocol) in MIDP 3.0, as part of final degree projects [74, 75].


2.4.5 Weaknesses of the security mechanisms of MIDP 3.0

MIDP 3.0 allows component-based programming. An application can access a shareable component running in a different execution environment through thin client APIs, like the IMC protocol previously mentioned. The shareable component handles requests from applications following the authorization-based security policy. As described in Section 2.1.1, this policy considers access authorization declarations and the credentials shown by the potential client applications in order to grant or deny access. The platform and application security levels were conceived as independent and complementary frameworks; however, the (unsafe) interplay of some of the defined security mechanisms may provoke (unexpected) violations of security policies. In what follows some potential weaknesses of MIDP 3.0 are put forward and a plausible (insecure) scenario is discussed.

In the first place, while at the platform level permissions are defined for each sensitive function of the device, at the application level access authorization declarations do not distinguish between different shareable components of the same MIDlet suite. In this way, once its credentials have been validated, a client application may have access to all the components shared by another application. In the second place, despite the fact that permissions are granted in various modes (one-shot, session, blanket), access authorization is allowed exclusively in a permanent manner. Finally, at the platform level the binding of an application to a protection domain is based on the X.509 Public Key Infrastructure. At the application level the same infrastructure is used to verify the integrity and authenticity of applications, except for one case: the declaration shown in (2.49), with the predicate vendorUnsignedAuthorization, has the vendor name as the unique credential needed to grant access authorization.

Considering the previously established elements, the following scenario is feasible. Let A be an application which is bound to a certain protection domain and has a shareable component. The shareable component uses a sensitive function f of the device (granted by the protection domain) in order to implement its service. This application A declares access authorizations for unsigned applications from a certain vendor V. Let B be an unsigned application which is bound to a different protection domain and has been denied permission to access the sensitive function f. Now, if the application B is capable of providing just the vendor name V as credential, it would be granted permanent access to the shareable component. Thereby, B shall be able to access the sensitive function f of the device. This is a clear example of an unwanted behavior, where a sensitive function is accessed by an application that does not hold the corresponding permission. The next section shows the code of an unwanted access in MIDP 3.0.

Two observations are drawn from the previous scenario. On the one hand, a stronger access authorization declaration is necessary. If declarations based only on the vendor name are left aside, all the remaining ones demand the integrity and authenticity of signatures and certificates. This will result in a more reliable security model. On the other hand, to avoid circumventing the security at the platform level two aspects should be considered. The first one is to declare the permissions of the protection domain exposed by a shareable component.
The second aspect is to extend the security policy by enforcing those permissions when sensitive functions are accessed by applications through shareable resources. Considering the previously mentioned observations, the formalization of the security model of MIDP 3.0 could be extended. This could be done by defining new conditions on the device state according to the security policy described above, and by establishing new pre- and postconditions for the access authorization request.

An Unwanted Access Example in MIDP 3.0

This example illustrates a possible scenario where an untrusted application manages to access, presenting a false credential, a resource shared by another application. The RMS is a sensitive resource of the device that allows one to create and to access information persisted in a record store. Regardless of the protection domain to which they are linked, all applications have permission to invoke the methods of the RMS. In MIDP 3.0 the application that owns a record store can share it with other applications and restrict access to it using the authorization mechanism described in this document. Provided by the RMS API, the openRecordStore method allows one to create and to access a record store. Depending on the purpose, each application uses a different signature of the same method. An application must use the following signature to create and share a record store in a restricted way. In particular, by using the AUTHMODE_APPLEVEL value as the third parameter of the method, the access control mechanism based on authorizations is activated:

public static javax.microedition.rms.RecordStore openRecordStore(
    String recordStoreName, boolean createIfNecessary,
    int authmode, boolean writable)

A bank account store named BankAccountsDb, say, with security at the application level can be created as follows:

rsPrv = RecordStore.openRecordStore(
    "BankAccountsDb", true, AUTHMODE_APPLEVEL, false);

To restrict access, the application must include access authorizations in the descriptor. The access declaration by vendor shown below authorizes all the applications made by the same vendor TrustyVendor that provides the application.

MIDlet-Name: TrustyMIDlet
MIDlet-Version: 1.0
MIDlet-Vendor: TrustyVendor
MIDlet-Access-Authorization-1: vendor;TrustyVendor

An application that intends to access the shared resource must use the signature of the openRecordStore method shown below. It also requires knowing the name of the shared resource and some data of the provider application, namely its vendor and public name:

public static javax.microedition.rms.RecordStore openRecordStore(
    String recordStoreName, String vendorName, String suiteName)

Now, let us suppose that an unauthorized third party seeks to obtain bank account information. To do so, it develops an application without signatures or digital certificates, using as vendor's name that of the owner of the record store. A fragment of the descriptor could be defined as follows:


MIDlet-Name: MaliciousMIDlet
MIDlet-Version: 1.0
MIDlet-Vendor: TrustyVendor

Not being signed, the malicious application is installed and linked to the Unidentified Third Party Protection Domain. This protects the platform from the application performing an unauthorized access to sensitive resources of the device. However, it does not prevent access to the shared bank accounts store. The following Java code allows the untrusted application to open the record store and then to obtain, using the method getRecord, and to process, using the procedure processRecord, sensitive information:

// open the shared store
RecordStore rs = RecordStore.openRecordStore(
    "BankAccountsDb", "TrustyVendor", "TrustyMIDlet" );
// proceed to read the records of the store
int nextID = rs.getNextRecordID();
byte[] data = null;
for( int id = 0; id < nextID; ++id ){
    try {
        int size = rs.getRecordSize( id );   // obtain the size of the record
        if( data == null || data.length < size )
            data = new byte[ size ];
        rs.getRecord( id, data, 0 );         // obtain the record
        processRecord( rs, id, data, size ); // process the record
    } catch( InvalidRecordIDException e ){
    } catch( RecordStoreException e ){
        handleError( rs, id, e );
    }
}

2.5 A framework for defining and comparing access control policies

An alternative model has been proposed that extends MIDP’s by introducing permissions with multiplicities and adding flexibility to the way in which permissions are granted by the user of the device and used by the applications running on it. This section presents a framework, formalized using Coq, suitable for defining and comparing the access control policies that can be enforced by (variants of) those security models and to prove desirable properties they should satisfy. The content of this section is based on [68].

2.5.1 An alternative security model for mobile devices

In [29], a security model for interactive mobile devices is put forward which can be grasped as an extension of that of MIDP. The work presented in this section has focused on developing a formal model for studying, in particular, interactive user querying mechanisms for permission granting for application execution on mobile devices. Like in the MIDP case, the notion of permission is central to this model. A generalisation of the one-shot permission described above is proposed that consists in associating to a permission a multiplicity which states how many times that permission can be used. The proposed model has two basic constructs for manipulating permissions: grant and consume. The grant construct models the interactive querying of the user, asking


whether the user grants a particular permission with a certain multiplicity. The consume construct models the access to a sensitive function which is protected by the security policy, and therefore requires (consumes) permissions. A semantics of the model constructs is proposed as well as a logic for reasoning on properties of the execution flow of programs using those constructs. The basic security property the logic allows one to prove is that a program will never attempt to access a resource for which it does not have a permission. The authors also provide a static analysis that makes it possible to verify that a particular combination of the grant-consume constructs does not violate that security property. For developing that kind of analysis the constructs are integrated into a program model based on control-flow graphs. This model has also been used in previous work on modelling access control for Java, see for instance [76, 77]. One of the objectives of the work that is being reported here has been to build a framework which would provide a formal setting to define the permission models defined by MIDP and the one presented in [29] (and variants of it) in a uniform way, and to perform a formal analysis and comparison of those models. This framework, which is formally defined using the CIC, adopts, with variations, most of the security and programming constructions defined in [29]. In particular, it has been modified so as to be parameterized by permission granting policies, whereas in the original work the granting policy is fixed.

2.5.2 A framework for access control modeling

This section introduces the formal setting used to define the security concepts that constitute the basis of certain access control mechanisms, and then describes how those mechanisms are used to define the permission granting models which are the object of analysis in this work.

Permissions

Every (controlled) resource of the device is given a type. Let ResType be the set of types of resources. If rt is a resource type, Resources rt and Actions rt define the set of resources of type rt available on the device and the actions that can be performed over them, respectively. The permissions of a resource type are defined as follows:

PermRes (rt : ResType) def=
    | valid : list (Resources rt) → list (Actions rt) → PermRes rt
    | invalid : PermRes rt
                                                                       (2.65)

That is, given a resource type rt, an object of type PermRes rt is a set (represented by a list) of actions and resources over rt, or the constant invalid. A relation ⊑PermRes is defined by applying set inclusion component-wise. This relation defines a lattice structure where invalid is the bottom element ⊥PermRes and ⊔PermRes is a lub operator, obtained by applying set union component-wise. As already mentioned, a notion of multiplicity of granted permissions is introduced in [29]. A multiplicity is defined to be either a natural number, a special value ∞ that


denotes unrestricted permission, or an error value ⊥. A type Mul is defined:

Mul def=
    | ⊥ : Mul
    | val : nat → Mul
    | ∞ : Mul
                                                                       (2.66)
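A possible Coq rendering of the Mul type, together with a boolean comparison in the spirit of the order ⊑Mul discussed next, is sketched below; the constructor names (MBot, MVal, MInf) are illustrative and need not coincide with those of the actual development.

Inductive Mul : Set :=
  | MBot : Mul              (* the error multiplicity, written ⊥ in the text *)
  | MVal : nat -> Mul       (* a finite number of allowed uses *)
  | MInf : Mul.             (* unrestricted use, written ∞ in the text *)

(* Boolean version of the order, with ⊥ as bottom and ∞ as top element. *)
Definition mul_leb (m1 m2 : Mul) : bool :=
  match m1, m2 with
  | MBot, _ => true
  | _, MInf => true
  | MVal n1, MVal n2 => Nat.leb n1 n2
  | _, _ => false
  end.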

It is straightforward to see that a lattice can be constructed over Mul with ⊥ and ∞ as the bottom and top elements, respectively. The obvious extensions of functions and predicates defined over naturals to functions and predicates over Mul, such as ⊑Mul, +Mul, −Mul, and predMul, are also defined. An accumulated permission for a resource type is comprised of two components: the set of resources and actions allowed and a multiplicity. One such permission (of resource type rt) is then grasped as an object of the following record type:

PermMul (rt : ResType) def= [[ permRes : PermRes rt, mul : Mul ]]                (2.67)

The lattice of permissions of a resource type can be obtained by defining the order ⊑PermMul:

pm1 ⊑PermMul pm2 def= pm1.permRes ⊑PermRes pm2.permRes ∧ pm1.mul ⊑Mul pm2.mul                (2.68)

where pm1 and pm2 are objects of type PermMul rt. Now, the permission state of the device is defined. One such state is ultimately a mapping that associates a permission to each resource type. Therefore, it is defined as the following dependent function type:

Perm def= ∀ (rt : ResType), PermMul rt                (2.69)

It is said that two permissions p1 and p2 are (extensionally) equal if for every resource type rt it holds that p1 rt = p2 rt. An order ⊑Perm can be defined as the product-wise extension of ⊑PermMul as follows:

p1 ⊑Perm p2 def= ∀ (rt : ResType), (p1 rt) ⊑PermMul (p2 rt)                (2.70)

In order to model the operations that affect the state of the permissions an update function is introduced:

update (p : Perm) (rt : ResType) (pres : PermRes rt) (m : Mul) : Perm                (2.71)

The intended (and formalized) behaviour of this function is that of a usual store updating operator: the permission state remains unchanged for every resource type different from rt, and for rt it yields ⟨pres, m⟩. This behavior is captured by the following two lemmas:

Lemma 2.5.1. ∀ (p : Perm) (rt : ResType) (pres : PermRes rt) (m : Mul),
    (update p rt pres m) rt = ⟨pres, m⟩                (2.72)

Lemma 2.5.2. ∀ (p : Perm) (rt rt′ : ResType) (pres : PermRes rt) (m : Mul),
    rt ≠ rt′ → (update p rt pres m) rt′ = p rt′                (2.73)


If rt is a resource type and p a permission state, then the following inductive relation Error is defined:

    (p rt).permRes = invalid rt                    (p rt).mul = ⊥
    ───────────────────────────                    ──────────────
             Error p                                    Error p                (2.74)

The intuition is that an error situation may occur when either there is an attempt to perform an action over a resource of type rt and no valid permission is associated to it (first rule) or when there are no granted permissions for that resource (second rule).

Programs

A program in, among others, [29, 78] is represented by a control-flow graph that captures the manipulations of permissions and the handling of method calls and returns as well as exceptions. A control-flow graph is a tuple G = (NO, EX, KD, TG, CG, EG, n0) where:

• NO is the set of nodes of the graph (one for each instruction),
• EX is the set of exceptions,
• KD is a function of type KD : NO → Instr that associates each node with an instruction,
• TG : NO → NO → Prop is the propositional function that characterizes the set of intra-procedural edges (i.e. n1 TG n2 if control can be transferred from the instruction at node n1 to the instruction at node n2 within the current procedure),
• CG is the set of inter-procedural edges (which can be used to capture dynamic method calls),
• EG : EX → NO → NO → Prop are the intra-procedural exception edges,
• n0 : NO is the graph entry node.

The instructions are formally defined in the framework by means of the following inductive type:

Instr def=
    | Grant : ∀ (rt : ResType), validPermRes rt → MulValid → Instr
    | Consume : ∀ (rt : ResType), validPermRes rt → Instr
    | Call : Instr
    | Return : Instr
    | Throw : EX → Instr
                                                                       (2.75)

where M ulV alid is the type of valid multiplicities, that is, different from the multiplicity ⊥. The definition of the operational semantics of programs strongly depends on those of the permission granting and consumption mechanisms. They are briefly discussed and described in what follows. In [29] two variants are discussed concerning the effect of the update operation after a permission has been granted: either the permissions before the update instruction are


discarded or they are accumulated. At first sight these permission granting policies have advantages and drawbacks. Furthermore, independently of this particular discussion, it is at this point that the permission model proposed by the authors introduces a generalization with respect to that of MIDP: the multiplicity of a permission. One of the main objectives of the work presented here has been to design a framework that would make it possible to provide a uniform setting where those different permission models could be formally defined and compared. To that end, the constructions defined to provide semantics to the computational behaviour of the programs, as well as to reason over that behaviour, have been parameterized by permission granting policies. One such parameter shall be formally represented by an object of the following type:

grantPolicy def= ∀ (rt : ResType), validPermRes rt → NZMulValid → Perm → Perm                (2.76)

where an object of type NZMulValid is a valid multiplicity constructed with a non-zero natural. As to the consumption of permissions, the following is the definition of the consume operation:

consume (rt : ResType) (pr : validPermRes rt) (p : Perm) : Perm def=
    if (pr ⊑PermRes (p rt).permRes)
    then update p rt (p rt).permRes (predMul (p rt).mul)
    else update p rt (invalid rt) (predMul (p rt).mul)                (2.77)

The consume operation is monotonic on permissions. This is stated (and proved) in the following lemma:

Lemma 2.5.3. ∀ (rt : ResType) (pr : validPermRes rt) (p p′ : Perm),
    p ⊑Perm p′ → (consume rt pr p) ⊑Perm (consume rt pr p′)                (2.78)
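The simplified Coq fragment below sketches, in a non-dependent setting, the shape of the consume operation: permission states as functions from resource types to pairs of a permission and a multiplicity, and consumption as a conditional update. The names and types (ResType, the boolean inclusion test, pred_mul) are illustrative placeholders; the real development uses permissions indexed by the resource type and an abstract ⊑PermRes, and the behaviour of predMul on 0 is a guess of this sketch.

Require Import List Arith.

(* Abstract resource types with decidable equality (an assumption of this sketch). *)
Parameter ResType : Type.
Parameter eq_rt_dec : forall r1 r2 : ResType, {r1 = r2} + {r1 <> r2}.

Inductive Mul : Set := MBot | MVal (n : nat) | MInf.

Definition pred_mul (m : Mul) : Mul :=
  match m with
  | MBot => MBot
  | MVal 0 => MBot          (* no uses left: drop to the error multiplicity *)
  | MVal (S n) => MVal n
  | MInf => MInf
  end.

(* Permissions over a resource type: a set of allowed items, or invalid.
   Natural numbers stand in for the lists of resources and actions. *)
Inductive PermRes : Type :=
  | valid (items : list nat)
  | invalid.

(* Boolean inclusion test playing the role of ⊑PermRes. *)
Definition perm_res_leb (p1 p2 : PermRes) : bool :=
  match p1, p2 with
  | invalid, _ => true
  | valid _, invalid => false
  | valid l1, valid l2 => forallb (fun x => existsb (Nat.eqb x) l2) l1
  end.

Record PermMul := { permRes : PermRes; mul : Mul }.

Definition Perm := ResType -> PermMul.

Definition update (p : Perm) (rt : ResType) (pr : PermRes) (m : Mul) : Perm :=
  fun rt' => if eq_rt_dec rt rt' then {| permRes := pr; mul := m |} else p rt'.

(* If the requested permission is included in the stored one, keep the stored
   permission and decrement the multiplicity; otherwise mark it invalid. *)
Definition consume (rt : ResType) (pr : PermRes) (p : Perm) : Perm :=
  if perm_res_leb pr (permRes (p rt))
  then update p rt (permRes (p rt)) (pred_mul (mul (p rt)))
  else update p rt invalid (pred_mul (mul (p rt))).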

Following [29], the small-step operational semantics of a control-flow graph has been defined basically as a relation that defines transitions between states consisting of a standard control-flow stack of nodes enriched with the permissions held at that point of the execution. This definition has been extended by making it depend on a permission granting policy g. Formally, it has been defined as an inductive propositional function −→g whose rules are depicted in Figure 2.7. An important property of this semantics is that it is non-intrusive, that is to say, the permission state does not interfere with execution. In other words, a transition will not be blocked by the absence of permissions. This is formally stated, and proved, in the following lemma:

Lemma 2.5.4. ∀ (g : grantPolicy) (s s′ : list NO) (ex ex′ : option EX) (p p′ : Perm),
    s ex p −→g s′ ex′ p′ → ∀ (q : Perm), ∃ (q′ : Perm), s ex q −→g s′ ex′ q′                (2.79)


KD n = Grant rt pr m      TG n n′
──────────────────────────────────────────────────
(n ⊲ s) None p  −→g  (n′ ⊲ s) None (g rt pr m p)

KD n = Consume rt pr      TG n n′
──────────────────────────────────────────────────
(n ⊲ s) None p  −→g  (n′ ⊲ s) None (consume rt pr p)

KD n = Call      CG n n′
──────────────────────────────────────────────────
(n ⊲ s) None p  −→g  (n′ ⊲ n ⊲ s) None p

KD r = Return      TG n n′
──────────────────────────────────────────────────
(r ⊲ n ⊲ s) None p  −→g  (n′ ⊲ s) None p

KD n = Throw ex      EG ex n h
──────────────────────────────────────────────────
(n ⊲ s) None p  −→g  (h ⊲ s) None p

KD n = Throw ex      ∀ (h : NO), ¬EG ex n h
──────────────────────────────────────────────────
(n ⊲ s) None p  −→g  (n ⊲ s) (Some ex) p

∀ (h : NO), ¬EG ex n h
──────────────────────────────────────────────────
(t ⊲ n ⊲ s) (Some ex) p  −→g  (n ⊲ s) (Some ex) p

EG ex n h
──────────────────────────────────────────────────
(t ⊲ n ⊲ s) (Some ex) p  −→g  (h ⊲ s) None p

Figure 2.7: Semantics of instructions

Traces

In [29] global results on the execution of programs are expressed on traces, which in turn are defined in terms of the operational semantics described above (instantiated for a particular grant policy) as follows: a partial trace of a control-flow graph is a sequence (of type snocList (NO, option EX)) of nodes

[] ⊳ ⟨n0, None⟩ ⊳ ⟨n1, e1⟩ ⊳ · · · ⊳ ⟨nk, ek⟩                (2.80)

such that for all 0 ≤ i < k there exist ρ, ρ′ ∈ Perm and s, s′ ∈ (list NO) verifying (ni ⊲ s) ei ρ −→g (ni+1 ⊲ s′) ei+1 ρ′.

The stacks s and s′ in the above definition are existentially quantified because they are not defined to be components of the elements of a trace. This quantification, however, induces a loss of information w.r.t. the operational semantics. An example should clarify this situation. Consider the control-flow graph

NO = {A, B, C, D}        EX = {}
KD = {(A, Return), (B, x), (C, Consume rt y), (D, Return)}
TG = {(B, C), (C, D)}    CG = {}    EG = {}    n0 = A

where x : Instr, rt : ResType, y : validPermRes rt, and with initial permission pinit = λ (rt : ResType) . ⟨(valid rt [] []), (val 0)⟩. Figure 2.8 depicts the control-flow graph in question. From this definition it can be noticed that [] ⊳ ⟨A, None⟩ is the only admissible trace yielding a valid permission state. According to the definition of partial trace stated above, the object ([] ⊳ ⟨A, None⟩ ⊳ ⟨C, None⟩ ⊳ ⟨D, None⟩) is admitted as a partial trace of the defined control-flow graph. This trace can be built using the transition rules for the Consume and Return instructions (see Figure 2.7). However, this latter trace yields an error situation, because the transition from node C to node D attempts to consume an unavailable permission.


Figure 2.8: Control-flow graph example

The definition of program execution traces that is proposed in the framework presented here remedies the situation described above by including the node stack as a component of the elements of the trace. This is formally represented by the following type:

Trace def= snocList [[ noT : NO, stT : list NO, exT : option EX ]]                (2.81)

The notion of parameterized partial trace is then inductively defined over elements of type Trace as follows:

──────────────                ────────────────────────────────
 PTraceg []                    PTraceg ([] ⊳ ⟨n0, [], None⟩)                (2.82)

 PTraceg (tr ⊳ ⟨n, s, ex⟩)      ∃ (p p′ : Perm), (n ⊲ s) ex p −→g (n′ ⊲ s′) ex′ p′
 ─────────────────────────────────────────────────────────────────────────────
              PTraceg (tr ⊳ ⟨n, s, ex⟩ ⊳ ⟨n′, s′, ex′⟩)                (2.83)

Let tr be a trace and g be a grant policy. If PTraceg tr holds, then tr is said to be a valid trace according to g. Given a trace tr and a grant policy g, the function

PermsOfg : Perm → Trace → Perm                (2.84)

computes the permission state resulting from the execution of the program that tr represents:

PermsOfg (pinit : Perm) (tr : Trace) : Perm def=
    match tr with
    | [] ⇒ pinit
    | tr′ ⊳ e ⇒ match KD e.noT with
                | Consume rt pr ⇒ consume rt pr (PermsOfg pinit tr′)
                | Grant rt pr m ⇒ g rt pr m (PermsOfg pinit tr′)
                | _ ⇒ PermsOfg pinit tr′
                end
    end                                                                       (2.85)

Finally, given a grant policy g, a trace is said to be safe if none of its prefixes yields a faulty permission state:

Safeg (tr : Trace) (pinit : Perm) def= ∀ tr′ : Trace, (prefix tr′ tr) → ¬Error (PermsOfg pinit tr′)                (2.86)

2.5.3 Permission grant policies

Two kinds of grant policies are analysed in [29]: given a resource type rt, one of the policies establishes that when a new permission is granted to resources of rt, all previously granted permissions are overwritten. This policy is called here grantow. The other policy, called here grantac, establishes that new granted permissions for rt are accumulated with the ones previously obtained for that resource type. These policies are formally defined as follows:

grantow : grantPolicy def=
    λ (p : Perm) (rt : ResType) (pr : PermRes rt) (m : Mul) . update p rt pr m                (2.87)

grantac : grantPolicy def=
    λ (p : Perm) (rt : ResType) (pr : PermRes rt) (m : Mul) .
        update p rt (pr ⊔PermRes (p rt).permRes) (m +Mul (p rt).mul)                (2.88)

The permission modes defined by MIDP are also defined below as grant policies. The grantbk term represents the blanket permission mode, which specifies unrestricted access to a given resource type. The one-shot permission mode, which specifies a single access to a given resource type, is represented by the term grantos.

grantbk : grantPolicy def=
    λ (p : Perm) (rt : ResType) (pr : PermRes rt) (m : Mul) .
        update p rt (pr ⊔PermRes (p rt).permRes) ∞                (2.89)

grantos : grantPolicy def=
    λ (p : Perm) (rt : ResType) (pr : PermRes rt) (m : Mul) . update p rt pr 1                (2.90)
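Continuing the simplified, non-dependent Coq sketch given after Lemma 2.5.3 (and reusing its Mul, PermRes, Perm and update), the four policies can be rendered as follows; the liftings mul_plus and perm_res_union are plausible guesses for +Mul and ⊔PermRes, and in particular the treatment of ⊥ in mul_plus is an assumption of this sketch.

Definition mul_plus (m1 m2 : Mul) : Mul :=
  match m1, m2 with
  | MBot, _ => MBot
  | _, MBot => MBot
  | MInf, _ => MInf
  | _, MInf => MInf
  | MVal n1, MVal n2 => MVal (n1 + n2)
  end.

(* Least upper bound on PermRes: invalid is the bottom element. *)
Definition perm_res_union (p1 p2 : PermRes) : PermRes :=
  match p1, p2 with
  | invalid, q => q
  | q, invalid => q
  | valid l1, valid l2 => valid (l1 ++ l2)
  end.

(* Overwrite the previously granted permission. *)
Definition grant_ow (rt : ResType) (pr : PermRes) (m : Mul) (p : Perm) : Perm :=
  update p rt pr m.

(* Accumulate with the previously granted permission. *)
Definition grant_ac (rt : ResType) (pr : PermRes) (m : Mul) (p : Perm) : Perm :=
  update p rt (perm_res_union pr (permRes (p rt))) (mul_plus m (mul (p rt))).

(* MIDP blanket mode: unrestricted multiplicity. *)
Definition grant_bk (rt : ResType) (pr : PermRes) (m : Mul) (p : Perm) : Perm :=
  update p rt (perm_res_union pr (permRes (p rt))) MInf.

(* MIDP one-shot mode: a single use. *)
Definition grant_os (rt : ResType) (pr : PermRes) (m : Mul) (p : Perm) : Perm :=
  update p rt pr (MVal 1).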

It should be noticed that both the allowed mode and the session permission mode specified by MIDP 2.0 can be modeled as a blanket grant policy. In the first case, the granted permission would hold for the rest of the life cycle of the application to which the permission is granted and, in the second case, the scope would be that of a session during which that application is active. In order to perform a comparative analysis of grant policies of the kind just defined, the following relation is defined:

g1 ⊑g g2 def= ∀ (tr : Trace) (p : Perm), PTraceg1 tr → Safeg1 p tr → Safeg2 p tr                (2.91)

This order establishes that given a control-flow graph, for every valid trace of the graph according to g1 and every initial set of permissions it holds that if the trace is safe by granting the permissions using g1 as policy, then it must also be safe if the permissions are granted using the policy g2 . Intuitively, g1 yields a more restrictive permission model. The following lemma states that the order relation between permission states preserves error situations. It can also be proved that ⊑g is a partial order (reflexive, transitive and antisymmetric). These results shall be of help when relating the grant policies described so far.


Lemma 2.5.5. ∀ (p1 p2 : P erm), Error p1 → p1 ⊑P erm p2 → Error p2

(2.92)

The relation ⊑g defines a lattice structure with the policies grantos and grantbk as the bottom and top elements, respectively. The following theorem states a sufficient condition (a criterion) to prove that two permission granting policies, say g1 and g2, are in the order relation (g1 ⊑g g2):

1. the error situations that arise using g2 as a policy are also error situations if g1 is used, and
2. if the grant policy g1 is applied, then every permission available at the end of a trace is also available if g2 is used instead of g1.

This theorem is important in order to compare different security policies.

Theorem 2.5.6. ∀ (g1 g2 : grantPolicy)
    (Herrors : ∀ (rt : ResType) (pr : validPermRes rt) (m : NZMulValid) (p : Perm),
        Error (g2 rt pr m p) → Error (g1 rt pr m p))
    (Hperms : ∀ (p : Perm) (tr : Trace), (PermsOfg1 p tr) ⊑Perm (PermsOfg2 p tr)),
    g1 ⊑g g2                                                                       (2.93)

Proof. The proof proceeds by induction over (P T raceg1 tr), which is obtained after unfolding g1 ⊑g g2 . If the trace tr is empty, then the theorem holds trivially. In the case the trace is a singleton node, the proof uses hypothesis Herrors and proceeds by doing case analysis on the instruction type associated with that node; the interesting case corresponds to the Grant instruction, since consume is monotonic w.r.t. ⊑P erm and the rest of the instructions do not affect the permission state. The inductive step follows basically from the Lemma 2.5.5, the hypothesis Hperms , and the induction hypothesis.

2.5.4 Relating permission grant policies

Using the formal setting defined so far it is now possible to state and prove a theorem that establishes how the four policies described in the previous section are related according to the order relation ⊑g . Theorem 2.5.7. grantos ⊑g grantow ⊑g grantac ⊑g grantbk

(2.94)

Proof. The proof proceeds first by proving the three inequalities grantos ⊑g grantow , grantow ⊑g grantac and grantac ⊑g grantbk , and then applying the transitivity of the order ⊑g . Each inequality is proved applying the theorem that establishes the sufficient conditions to prove that two grant policies are in the order relation (Theorem 2.5.6), and following a similar strategy. Here it shall be presented in detail the proof of the first inequality, indications on how to proceed for the remaining two cases shall also be provided.


The application of Theorem 2.5.6 to prove grantos ⊑g grantow generates in turn the following proof obligations:

∀ (rt : ResType) (pr : validPermRes rt) (m : NZMulValid) (p : Perm),
    Error (g2 rt pr m p) → Error (g1 rt pr m p)                (2.95)

∀ (p : Perm) (tr : Trace), (PermsOfg1 p tr) ⊑Perm (PermsOfg2 p tr)                (2.96)

The proof of (2.95) proceeds by first applying Lemma 2.5.5. This reduces the goal to proving that (grantos rt pr m p) ⊑Perm (grantow rt pr m p). Unfolding the definitions of grantow and grantos, and applying the lemmas that characterize the function update, we have to prove ⟨pr, 1⟩ ⊑PermMul ⟨pr, m⟩ and (p rt) ⊑PermMul (p rt). The latter follows directly because ⊑PermMul is reflexive. As to the former, since m : NZMulValid the least value it can take is 1, in which case, ⊑PermMul being reflexive, the obligation is discharged. For (2.96), the proof proceeds by induction on tr:

• tr = []: the inequality simplifies to p ⊑Perm p and, since ⊑Perm is reflexive, this obligation is discharged.
• tr = tr′ ⊳ ⟨n, st, ex⟩: the proof proceeds by case analysis on KD n. The relevant cases are Grant and Consume, since the rest of the instructions do not affect the permission state. The Consume case is straightforward since the function consume is monotonic, and by induction hypothesis it is known that:

(PermsOfgrantos p tr′) ⊑Perm (PermsOfgrantow p tr′)                (2.97)

The Grant case is proved using transitivity of ⊑Perm, the induction hypothesis and the following two lemmas:

– ∀ (rt : ResType) (pr : validPermRes rt) (m : NZMulValid) (p : Perm),
    (grantos rt pr m p) ⊑Perm (grantow rt pr m p)
– ∀ (rt : ResType) (pr : validPermRes rt) (m : NZMulValid) (p p′ : Perm),
    p ⊑Perm p′ → (grantow rt pr m p) ⊑Perm (grantow rt pr m p′)                (2.98)

The structure of the proof of the two remaining inequalities is quite similar to the one just described. In both cases the bulk of the proof reduces to proving auxiliary lemmas similar to the ones of the proof obligation (2.96) for the involved grant policies. This theorem and its proof provide formal evidence that, in the first place, of the four policies, MIDP's one-shot is the most restrictive policy and MIDP's blanket is the most permissive one. In addition to that, these two policies have been formally related with the permission grant policies defined in [29]. Furthermore, the theorem also formally relates these two latter granting policies, showing that the accumulative one is more permissive than the overwriting one. The difference between accumulating permissions and overwriting permissions is subtle. The problem with accumulating permissions is that, at any program point, in order to approximate the permissions available for a given resource type one has to consider all


the consumptions and all the permissions granted for that resource type. In the overwriting grant policy, on the other hand, it is enough to consider the last grant operation and the subsequent consume operations. This suggests that a static permission analysis might be simpler using the overwriting grant policy.
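As a small illustration (a hypothetical run, not taken from the formal development), consider a single resource type rt and the sequence grant(rt, pr, 2); consume(rt, pr); grant(rt, pr, 3). Under grantac the remaining multiplicity evolves as 2, 1 and then 1 + 3 = 4, so deciding whether a later consume is safe requires adding up every grant and subtracting every consume issued so far. Under grantow the same sequence yields 2, 1 and then 3, and only the operations performed since the last grant are relevant.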

2.6 Related work

This work builds upon and extends a number of previously published papers [65, 66, 67, 68]. In what follows we discuss work related with the results that have been presented in the previous sections.

Some effort has been put into the evaluation of the security model for MIDP 2.0: Kolsi and Virtanen [79] and Debbabi et al. [80] analyze the application security model, spot vulnerabilities in various implementations and suggest improvements to the specification. Although these works report on the detection of security holes, they do not intend to prove their absence. The formalizations we overview in this article, however, provide a formal basis for the verification of the model and the understanding of its intricacies.

Various articles analyze access control in Java and C♯, see for instance [78, 77, 81, 76, 82]. All these works have mainly focused on the stack inspection mechanism and the notion of granting permissions to code through privileged method calls. The access control check procedures in MIDP do not involve a stack walk. While in the case of the JSE platform there exists a high-level specification of the access controller module (the basic mechanism is based on stack inspection [28]), no equivalent specification exists for JME. In this work, we illustrate the pertinence of the specification of the MIDP (version 2.0) security model that has been developed by specifying and proving the correctness of an access control module for the JME platform.

Besson, Dufay and Jensen [29, 83] have put forward an alternative access control model for mobile devices that generalizes the MIDP model by introducing permissions with multiplicities, extending the way permissions are granted by users and consumed by applications. One of the main outcomes of the work we report in the present paper is a general framework that sets up the basis for defining, analyzing and comparing access control models for mobile devices. In particular, this framework subsumes the security model of MIDP and several variations, including the model defined in [29, 83].

Android [25] is an open platform for mobile devices developed by the Open Handset Alliance led by Google, Inc. Focusing on security, Android combines two levels of enforcement [26, 27]: the Linux system level and the application framework level. At the Linux system level, Android is a multi-process system. The Android security model resembles a multi-user server, rather than the sandbox model found on the JME platform. At the application framework level, Android provides fine control through the Inter-Component Communication reference monitor, which provides Mandatory Access Control enforcement on how applications access the components. There have been several analyses of the security of the Android system, but few of them pay attention to the formal aspect of the permission enforcing framework. In [84, 85], the authors propose an entity-relationship model for the Android permission scheme [86]. This work (the first formalization of the permission scheme enforced by the Android framework) builds a state-based formal model and provides a behavioral specification, based in turn on the specification developed in [65] and described in Section 2.3. The abstract operation set considered in [84, 85] does not include permission request/revoke operations present in MIDP. In

[87], Chaudhuri presents a core language to describe Android applications, and to reason about their dataflow security properties. The paper introduces a type system for security in this language. The system exploits the access control mechanisms already provided by Android. Furthermore, [26] reports a logic-based tool, Kirin, for determining whether the permissions declared by an application satisfy a certain safety invariant. A formal comparison between the JME-MIDP and Android security models is interesting further work; a first informal analysis is developed in the technical report [88], which also details the Android security model and analyzes some of the more recent works that formalize different aspects of this model. Language-based access control has been investigated for some idealised program models, see e.g. [89, 90, 91]. These works make use of static analysis for verifying that resources are accessed according to the specified access control policies and that no security violations will occur at run-time. They do not study, though, specific language primitives for implementing particular access control models.

2.7 Summary

The Java Micro Edition platform defines the Mobile Information Device Profile (MIDP) to facilitate the development of applications for mobile devices. In this chapter, we have studied and compared formally several variants of the security model specified by MIDP to access sensitive resources of a device. Our contributions are manifold. First, we have described a formal specification of the MIDP 2.0 security model that has been developed using the Coq proof assistant. This formalization provides an abstraction of the state of a device and of security-related events that allows one to reason about the security properties of the platform where the model is deployed. Then, we have illustrated the pertinence of this formalization by specifying and proving the correctness of an access control module. The latest version of MIDP (3.0) introduces a new dimension in the security model at the application level. We have extended the formal specification developed for MIDP 2.0 to incorporate the changes introduced in MIDP 3.0, and we have shown that this extension is conservative, in the sense that it preserves the security properties we proved for the previous model. We have also spotted and discussed some weaknesses introduced by the security mechanisms of version 3.0. An alternative access control model for mobile devices, which generalizes the MIDP model by introducing permissions with multiplicities and by extending the way permissions are granted and consumed, has been proposed in [29, 83]. Finally, we have presented a general framework in the Coq proof assistant that allows one to define and compare access control policies that can be enforced by (variants of) this generalized security model.


Chapter 3

Formally verifying security properties in an idealized model of virtualization

3.1 Introduction

Virtualization is a prominent technology that allows high-integrity, safety-critical systems and untrusted, non-critical systems to coexist securely on the same platform and efficiently share its resources. To achieve the strong security guarantees requested by these application scenarios, virtualization platforms impose a strict control on the interactions between their guest systems. While this control theoretically guarantees isolation between guest systems, implementation errors and side-channels often lead to breaches of confidentiality, allowing a malicious guest system to obtain secret information, such as a cryptographic key, about another guest system.

Over the last few years, there have been significant efforts to prove that virtualization platforms deliver the expected, strong, isolation properties between operating systems. The most prominent efforts in this direction are within the Hyper-V [14, 15] and L4.verified [16] projects, which aim to derive strong guarantees for concrete implementations: more specifically, Murray et al. [33] recently presented a machine-checked information flow security proof for the seL4 microkernel. In comparison with the Hyper-V and L4.verified projects, our proofs are based on an axiomatization of the semantics of a hypervisor, and abstract away many details from the implementation; on the other hand, our model integrates caches and Translation Lookaside Buffers (TLBs), two security relevant components that are not considered in these works.

There are three important isolation properties for virtualization platforms. On the one hand, read and write isolation respectively state that guest operating systems cannot read and write on memory that they do not own. On the other hand, OS isolation states that the behavior of a guest operating system does not depend on the previous behavior and does not affect the future behavior of other operating systems. In contrast to read and write isolation, which are safety properties and can be proved with deductive verification methods, OS isolation is a 2-safety property [17, 18] that requires reasoning about two program executions. Unfortunately, the technology for verifying 2-safety properties is not fully mature, making their formal verification on large and complex programs exceedingly challenging.


3.1.1 A primer on virtualization

This section provides a primer on virtualization, focusing on the elements that are most relevant for our formal model, which we develop in Section 3.2. Virtualization is a technique used to run multiple operating systems, called guest operating systems, on the same physical machine. The hypervisor, or Virtual Machine Monitor [92], is a thin layer of software that manages the shared resources (e.g. CPU, system memory, I/O devices). It allows guest operating systems to access these resources by providing them with an abstraction of the physical machine on which they run. One of the most important features of a virtualization platform is that its OSs run isolated from one another. In order to guarantee isolation and to keep control of the platform, a hypervisor makes use of the different execution modes of a modern CPU: the hypervisor itself and trusted guest OSs run in supervisor mode, in which all CPU instructions are available, while untrusted guest operating systems run in user mode, in which privileged instructions cannot be executed.

Historically there have been two different styles of virtualization: full virtualization and paravirtualization. In the first one, each virtual machine is an exact duplicate of the underlying hardware, making it possible to run unmodified operating systems on top of it. When the OS attempts to execute a privileged instruction, the hardware raises a trap that is captured by the hypervisor, which then emulates the instruction's behavior. In the paravirtualization approach, each virtual machine is a simplified version of the physical architecture. The guest (untrusted) operating systems must then be modified to run in user CPU mode, changing privileged instructions to hypercalls, i.e. calls to the hypervisor. A hypercall interface allows OSs to perform a synchronous software trap into the hypervisor to perform a privileged operation, analogous to the use of system calls in conventional operating systems. An example use of a hypercall is to request a set of page table updates, in which the hypervisor validates and applies a list of updates, returning control to the calling OS when this is completed.

In this work, we focus on the memory management policy of a paravirtualization style hypervisor, based on the Xen virtualization platform [93]. Several features of the platform are not yet modeled (e.g. I/O devices, the interruption system, or the possibility to execute on multi-cores), and are left as future work.

3.1.2 Contents of the chapter

This Chapter builds upon and extends the previously published papers [94, 95]. In Section 3.2 we develop our formal model. Isolation properties are considered in Section 3.3, whereas availability is discussed in Section 3.4. Section 3.5 presents an extension of the memory model with cache and TLB. Section 3.6 describes the executable semantics of the hypervisor and outlines the proof of correctness of the implementation. In Section 3.7 we present the isolation theorems for the extended model with execution errors and a proof of transparency. Section 3.8 considers related work, and finally Section 3.9 concludes with a summary of the chapter.

3.1.3 Formalization

The formal development is available at http://www.fing.edu.uy/inco/grupos/gsi/proyectos/virtualcert.php


and can be verified using the Coq proof assistant.

3.2 The model

In this section we present and discuss the formal specification of the idealized model. We first introduce the set of states, and the set of actions; the latter include both operations of the hypervisor and of the guest operating systems. The semantics of each action is specified by a precondition and a postcondition. Then, we introduce a notion of valid state and show that state validity is preserved by execution. Finally, we define execution traces.

3.2.1 Informal overview of the memory model

The most important component of the state is the memory model, which we proceed to describe. As illustrated in Figure 3.1, the memory model involves three types of addresses and two address mappings: the machine address space is the real machine memory; the physical memory is used by the guest OS, and the virtual memory is used by the applications running on an operating system. The virtual memory is the one used by the applications running on the guest OSs.

Figure 3.1: Memory model of the platform

Each OS stores a partial mapping of virtual addresses to machine addresses. This will allow us to represent the translation of the virtual addresses of the applications executing in the OS into real hardware addresses. Moreover, each OS has a designated portion of its virtual address space (usually abbreviated VAS) that is reserved for the hypervisor to attend hypercalls. We say that a virtual address va is accessible by the OS if it belongs to the part of the virtual address space of the OS which is not reserved for the hypervisor. We denote the type of virtual addresses by vadd. The physical memory is the one addressed by the kernel of the guest OS. In the Xen [93] platform, this is the type of addresses that the hypervisor exposes to the domains (the untrusted guest OSs in our model). The type of physical addresses is written padd.


The machine memory is the real machine memory. A mechanism of page classification was introduced in order to cover concepts from certain virtualization platforms, in particular Xen. The model considers that each machine address that appears in a memory mapping corresponds to a memory page. Each page has at most one unique owner, a particular OS or the hypervisor, and is classified either as a data page with read/write access or as a page table, where the mappings between virtual and machine addresses reside. It is required to register (and classify) a page before being able to use or map it. The type of machine addresses is written madd.

As to the mappings, each OS has an associated collection of page tables (one for each application executing on the OS) that map virtual addresses into machine addresses. When executed, the applications use virtual addresses; therefore, on a context switch the current page table of the OS must change so that the currently executing application may be able to refer to its own address space. Neither applications nor untrusted OSs have permission to read or write page tables, because these actions can only be performed in supervisor mode. Every memory address accessed by an OS needs to be associated to a virtual address. The model must guarantee the correctness of those mappings, namely, that every machine address mapped in a page table of an OS is owned by it. The mapping that associates, for each OS, physical addresses to machine ones is, in our model, maintained by the hypervisor. This mapping might be treated differently by each specific virtualization platform; there are platforms in which this mapping is public and the OS is allowed to manage machine addresses. The physical-to-machine address mapping is modified by the actions page pin and page unpin, as shall be described in Section 3.2.3.

3.2.2 Formalizing states

The platform state consists of a collection of components that we now proceed to describe.

Operating systems

We start from a type os ident of identifiers for guest operating systems, and a predicate trusted os indicating whether a guest operating system is trusted. The state contains information about each guest OS current page table, which is a physical address, and information on whether it has a hypercall pending to be resolved. Formally, the information is captured by a mapping oss map that associates OS identifiers with objects of type os, where

os def= [[ curr page : padd, hcall : option Hyper call ]]
oss map def= os ident ↦ os                                            (3.1)

Execution modes

Most hardware architectures distinguish at least two execution modes, namely user mode (usr) and supervisor mode (svc). These modes are used as a protection mechanism, where privileged instructions are only allowed to be executed in supervisor mode. In our model, untrusted OSs execute in user mode while trusted ones and the hypervisor execute in supervisor mode. When an untrusted OS needs to execute a privileged operation, it


requests the hypervisor to do it on its behalf. Execution modes are formalized by the enumerated type exec mode, where

exec mode def= usr | svc                                               (3.2)

Moreover, there is a single active OS in each state. After requesting the hypervisor to execute some service, the active guest OS will turn its processor execution mode to waiting until the service is completed and the execution control is returned, then switching its execution mode back to running. The active OS execution mode is formalized by the type

os activity def= running | waiting                                     (3.3)

Memory mappings

The mapping that, given an OS, returns the corresponding mapping from physical to machine addresses is formalized as an object of the type hypervisor map, where

hypervisor map def= os ident ↦ (padd ↦ madd)                          (3.4)

The real platform memory is formalized as a mapping that associates a page to each machine address, thus

system memory def= madd ↦ page                                        (3.5)

A page consists of a page content and a reference to the page owner. Page contents can be either (readable/writable) values, an OS page table, or nothing; note that a page might have been created without having been initialized, hence the use of option types. Page owners can be the hypervisor, a guest OS, or none. Formally:

content def= RW (v : option Value) | PT (va to ma : vadd ↦ madd) | Other
page owner def= Hyp | Os (osi : os ident) | No Owner
page def= [[ page content : content, page owned by : page owner ]]
                                                                       (3.6)

States

The states of the platform are modeled by a record with six components:

State def= [[ active os      : os ident,
             aos exec mode  : exec mode,
             aos activity   : os activity,
             oss            : oss map,
             hypervisor     : hypervisor map,
             memory         : system memory ]]                        (3.7)
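A plausible Coq rendering of these components is sketched below; partial maps (written ↦ above) are represented here as functions into option types, and all base types are declared abstractly, so the definitions are indicative rather than those of the actual development.

(* Abstract base types of the model. *)
Parameter os_ident padd madd vadd : Set.
Parameter Hyper_call Value : Set.

Record os := { curr_page : padd; hcall : option Hyper_call }.
Definition oss_map := os_ident -> option os.

Inductive exec_mode := usr | svc.
Inductive os_activity := running | waiting.

Definition hypervisor_map := os_ident -> option (padd -> option madd).

Inductive content :=
  | RW (v : option Value)
  | PT (va_to_ma : vadd -> option madd)
  | Other.

Inductive page_owner := Hyp | Os (osi : os_ident) | No_Owner.

Record page := { page_content : content; page_owned_by : page_owner }.
Definition system_memory := madd -> option page.

Record State := {
  active_os     : os_ident;
  aos_exec_mode : exec_mode;
  aos_activity  : os_activity;
  oss           : oss_map;
  hypervisor    : hypervisor_map;
  memory        : system_memory
}.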


Pre s (read va) def= os accessible(va) ∧ s.aos activity = running ∧
    ∃ ma : madd, va mapped to ma(s, va, ma) ∧ is RW((s.memory[ma]).page content)

Post s (read va) s′ def= s = s′

Pre s (write va val) def= Pre s (read va)

Post s (write va val) s′ def= ∃ ma : madd, va mapped to ma(s, va, ma) ∧
    s′ = s · [memory := (s.memory[ma := ⟨RW (Some val), s.active os⟩])]

Pre s (chmod) def= s.aos activity = waiting ∧ (s.oss[s.active os]).hcall = None

Post s (chmod) s′ def=
    ( trusted os(s.active os) ∧ s′ = s · [aos exec mode := svc, aos activity := running] )
    ∨ ( ¬ trusted os(s.active os) ∧ s′ = s · [aos exec mode := usr, aos activity := running] )

Pre s (page pin untr o pa t) def= ¬ trusted os(o) ∧ s.aos activity = waiting ∧
    (s.oss[o]).hcall = Some (Hyperv call pin(pa, t)) ∧
    physical address not allocated(s.hypervisor[o], pa) ∧
    ∃ ma : madd, memory available(s.memory, ma)

Post s (page pin untr o pa t) s′ def= ∃ ma : madd, memory available(s.memory, ma) ∧
    s′ = s · [ oss := s.oss[o := ⟨None, (s.oss[o]).curr page⟩],
               hypervisor := s.hypervisor[o, pa := ma],
               memory := s.memory[ma := newpage(t, o)] ]

Figure 3.2: Formal specification of actions semantics

The component active os indicates which is the active operating system, and the components aos exec mode and aos activity the corresponding execution and processor mode. oss stores the information of the guest operating systems of the platform. Finally, the components hypervisor and memory are the mappings used to formalize the memory model described in the previous section. We define the predicate os accessible(va), which holds if va belongs to the set of virtual addresses accessible by any OS.

Valid state

We define a notion of valid state that captures essential properties of the platform. Formally, the predicate valid state holds on a state s if s satisfies the following properties:

• a trusted OS has no pending hypercalls;
• if the active OS is in running mode then no hypercall requested by it is pending;


• if the hypervisor or a trusted OS is running, the processor must be in supervisor mode;
• if an untrusted OS is running, the processor must be in user mode;
• the hypervisor maps an OS physical address to a machine address owned by that same OS; this mapping is also injective;
• all page tables of an OS o map accessible virtual addresses to pages owned by o and non-accessible ones to pages owned by the hypervisor;
• the current page table of any OS is owned by that OS;
• any machine address ma which is associated to a virtual address in a page table has a corresponding pre-image, which is a physical address, in the hypervisor mapping.

All properties have a straightforward interpretation in our model. For example, the first property is captured by the proposition:

∀ osi : os ident, trusted os(osi) → (s.oss[osi]).hcall = None                (3.8)

Valid states are invariant under execution, as shall be shown later.

3.2.3 Actions

Table 3.1 summarises the set of actions specified in the model, and their effects. Actions can be classified as follows:

• hypervisor calls new, delete, pin, unpin and lswitch;
• change of the active OS by the hypervisor (switch);
• access, from an OS or the hypervisor, to memory pages (read and write);
• update of page tables by the hypervisor on demand of an untrusted OS or by a trusted OS directly (new and delete);
• changes of the execution mode (chmod, ret ctrl);
• changes in the hypervisor memory mapping (pin and unpin), which are performed by the hypervisor on demand of an untrusted OS or by a trusted OS directly. These actions model (de)allocation of resources.

Actions Semantics

The behaviour of actions is specified by a precondition Pre and by a postcondition Post of respective types:

Pre : State → Action → Prop
Post : State → Action → State → Prop                                  (3.9)

Figure 3.2 provides the axiomatic semantics of some relevant actions, namely, read, write, chmod and page pin untr (the names of the auxiliary predicates used should be self-explanatory). Notice that what is specified is the effect the execution of an action


read va: A guest OS reads virtual address va.
read hyper va: The hypervisor reads virtual address va.
write va val: A guest OS writes value val in virtual address va.
write hyper va val: The hypervisor writes value val in virtual address va.
new tr va pa: The virtual address va is mapped to the machine address ma in the memory mapping of the trusted active OS, where pa translates to ma for the active OS.
new untr o va pa: The hypervisor adds (on behalf of the OS o) a new ordered pair (mapping virtual address va to the machine address ma) to the current memory mapping of the untrusted OS o, where pa translates to ma for o.
new hyper va ma: The hypervisor adds a new ordered pair to the current memory mapping of the active OS (mapping virtual address va to the machine address ma) for its own purposes.
del tr va: The trusted active OS deletes the ordered pair that maps virtual address va from its memory mapping.
del untr o va: The hypervisor deletes (on behalf of the OS o) the ordered pair that maps virtual address va from the current memory mapping of o.
del hyper va: The hypervisor deletes (for its own purposes) the ordered pair that maps virtual address va from the current memory mapping of the active OS.
switch o: The hypervisor sets o to be the active OS.
lswitch tr pa: The trusted active OS changes its current memory mapping to be the one located at physical address pa. This action corresponds to a traditional context switch by the active OS.
lswitch untr o pa: The hypervisor changes the current memory mapping of the untrusted active OS to be the one located at physical address pa.
silent: Represents the silent action (the system does not advertise any effects).
hcall c: An untrusted OS requires privileged service c to be executed by the hypervisor.
ret ctrl: Returns the execution control to the hypervisor.
chmod: The hypervisor changes the execution mode from supervisor to user mode, if the active OS is untrusted, and gives it the execution control.
page pin tr pa t: The memory page that corresponds to physical address pa (for the active OS) is registered and classified with type t.
page pin untr o pa t: The memory page that corresponds to physical address pa (for the untrusted OS o) is registered and classified with type t.
page unpin tr pa: The memory page that corresponds to physical address pa (for the active OS) is un-registered.
page unpin untr o pa: The memory page that corresponds to physical address pa (for the untrusted OS o) is un-registered.

Table 3.1: Platform actions

In particular, the action read does not return the accessed value. The precondition of the action read va requires that va is accessible by the active OS, that there exists a machine address ma to which va is mapped, that the active OS is running, and that the page indexed by the machine address ma is readable/writable. The postcondition requires the execution of this action to leave the state unchanged.


The precondition of the action write is identical to that of the action read. The postcondition establishes that the state after the execution of the action only differs in the value (val) of the page associated to ma, which is owned by the active OS. The precondition of the action chmod requires that there is no pending hypercall for the active OS. The postcondition establishes that after the execution of the action, if the active OS is a trusted one, then the effect on the state is to change its execution mode to supervisor mode. Otherwise, the execution mode is set to user mode. In both cases, the processor mode is set to running. The execution of the action page pin untr requires, in the first place, that the hypervisor is running and that the active OS is untrusted. In addition, the OS o must be waiting for a hypercall to pin the physical address pa of type t, pa must not be already allocated, and there must be machine memory available. The effect of the action is to create and allocate at machine address ma a new page of type t whose owner is the OS o and to bind, in the hypervisor mapping, the physical address pa to ma. The rest of the state remains unchanged.

One-step execution

The execution of an action is specified by the relation $\hookrightarrow$:

$$\frac{valid\_state(s) \qquad Pre\ s\ a \qquad Post\ s\ a\ s'}{s \overset{a}{\hookrightarrow} s'} \qquad (3.10)$$

Whenever an action occurs for which the precondition holds, the (valid) state may change in such a way that the action postcondition is established. The notation $s \overset{a}{\hookrightarrow} s'$ may be read as: the execution of the action a in a valid state s results in a new state s'. Note that this definition of execution does not consider the cases where the preconditions of the actions are not fulfilled.

Invariance of valid state

One-step execution preserves valid states, that is to say, the state resulting from the execution of an action is also a valid one.

Lemma 3.2.1.
$$\forall\ (s\ s' : State)\ (a : Action),\ s \overset{a}{\hookrightarrow} s' \rightarrow valid\_state(s') \qquad (3.11)$$
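As an illustration, the one-step execution relation and the invariance property can be phrased in Coq along the following lines. This is only a sketch under the abstract declarations introduced above; the proof of the preservation lemma is left admitted here, whereas in the actual development it proceeds by case analysis on the action.

Parameter State Action : Type.
Parameter valid_state : State -> Prop.
Parameter Pre  : State -> Action -> Prop.
Parameter Post : State -> Action -> State -> Prop.

(* One-step execution: rule (3.10) as an inductive relation. *)
Inductive exec_a : State -> Action -> State -> Prop :=
| exec_step : forall (s s' : State) (a : Action),
    valid_state s -> Pre s a -> Post s a s' -> exec_a s a s'.

(* Lemma 3.2.1: execution preserves validity of states. *)
Lemma exec_preserves_valid_state :
  forall (s s' : State) (a : Action), exec_a s a s' -> valid_state s'.
Admitted.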

Platform state invariants, such as state validity, are useful to analyze other relevant properties of the model. In particular, the results presented in this work are obtained from valid states of the platform.

3.2.4 Traces

Isolation properties are eventually expressed on execution traces, rather than on execution steps; likewise, availability properties are formalized as fairness properties stating that something good will eventually happen in an execution trace. Thus, our formalization includes a definition of execution traces and proof principles to reason about them.


Informally, an execution trace is defined as a stream (an infinite list) of states that are related by the transition relation $\hookrightarrow$, i.e. an object of the form

$$s_0 \overset{a_0}{\hookrightarrow} s_1 \overset{a_1}{\hookrightarrow} s_2 \overset{a_2}{\hookrightarrow} s_3 \ldots \qquad (3.12)$$

such that every execution step $s_i \overset{a_i}{\hookrightarrow} s_{i+1}$ is valid. Formally, an execution trace is defined as a stream t of pairs of states and actions, such that for every $i \geq 0$, $s[i] \overset{a[i]}{\hookrightarrow} s[i+1]$, where $t[i] = \langle s[i], a[i] \rangle$ and $t[i+1] = \langle s[i+1], a[i+1] \rangle$. We let Trace denote the type of traces. State properties are lifted to properties on pairs of states and actions in the obvious way. Moreover, state properties can be lifted to properties on traces; formally, each predicate P on states can be lifted to the predicates $\Box P$ (read: always P) and $\Diamond P$ (read: eventually P) of a temporal logic such as Computation Tree Logic (CTL) [96, 97], formalized in the CIC (in Coq) [98]. The former, $\Box P$, is defined co-inductively by the clause $\Box(P, s :: t)$ iff $P(s)$ and $\Box(P, t)$, whereas the latter, $\Diamond P$, is defined inductively by the clauses $\Diamond(P, s :: t)$ iff $P(s)$ or $\Diamond(P, t)$; each modality has a reasoning principle attached to its definition. Similar modalities can be defined for relations, and can be used to express isolation properties. In particular, given a relation R on states and two traces $t_1$ and $t_2$, we have $\Box(R, t_1, t_2)$ iff $R(t_1[i], t_2[i])$ for all i.
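The two modalities can be defined in Coq as follows. This is a simplified sketch in which traces are plain streams of states (the actual development pairs states with actions); all names are illustrative.

Section TraceModalities.
  Variable State : Type.

  (* Traces as infinite streams of states. *)
  CoInductive Trace : Type :=
  | Cons : State -> Trace -> Trace.

  (* "always P": P holds of every state of the trace (co-inductive definition). *)
  CoInductive always (P : State -> Prop) : Trace -> Prop :=
  | always_cons : forall s t, P s -> always P t -> always P (Cons s t).

  (* "eventually P": P holds of some state of the trace (inductive definition). *)
  Inductive eventually (P : State -> Prop) : Trace -> Prop :=
  | ev_here  : forall s t, P s -> eventually P (Cons s t)
  | ev_later : forall s t, eventually P t -> eventually P (Cons s t).
End TraceModalities.

The co-inductive definition of always comes with a co-induction principle, while the inductive definition of eventually comes with the usual induction principle; these are the reasoning principles mentioned above.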

3.3 Isolation properties

We formally establish that the hypervisor enforces strong isolation properties: an operating system can only read and modify memory that it owns, and its behavior is independent of the state of the other operating systems. The properties are established for a single step of execution, and then extended to traces.

Read isolation

Read isolation captures the intuition that no OS can read memory that does not belong to it. Formally, read isolation states that the execution of a read va action requires that va is mapped to a machine address ma that belongs to the current memory mapping of the active OS, and that is owned by the active OS.

Lemma 3.3.1.
$$\forall\ (s\ s' : State)\ (va : vadd),\ s \overset{read\ va}{\hookrightarrow} s' \rightarrow \exists\ ma : madd,\ va\_mapped\_to\_ma(s, va, ma)\ \wedge\ \exists\ pg : page,\ pg = s.memory[ma]\ \wedge\ pg.page\_owned\_by = s.active\_os \qquad (3.13)$$

The property is proved by inspection of the pre- and postconditions of the read action, using the definition of valid state.

Write isolation

Write isolation captures the intuition that an OS cannot modify memory that it does not own. Formally, write isolation states that, unless the hypervisor is running, the execution of any action will at most modify memory pages owned by the active OS, or it will allocate a new page for that OS.


Lemma 3.3.2.
$$\forall\ (s\ s' : State)\ (a : Action)\ (ma : madd),\ s \overset{a}{\hookrightarrow} s' \rightarrow \neg\ hyper\_running(s) \rightarrow s'.memory[ma] = s.memory[ma]\ \vee\ owner\_or\_free(s.memory, ma, s.active\_os) \qquad (3.14)$$

where hyper running and owner or free respectively denote that the hypervisor is running, and that the owner of the given machine address is either the given OS or the address is free. The property is proved by case analysis on the action executed. The relevant cases are the actions that are performed by the active OS and that modify the memory; for each such action, the property follows from its pre- and postconditions, and from the definition of valid state.

OS isolation

OS isolation is a 2-safety property [17, 18], cast in terms of two executions of the system, and is closely related to the non-influence property studied by Oheimb and coworkers [34, 99]. OS isolation captures the intuition that the behavior of any OS does not depend on the states of other OSs, and is expressed using a notion of equivalence w.r.t. an operating system osi. Formally, two states s and s' are osi-equivalent, denoted $s \equiv_{osi} s'$, if the following conditions are satisfied:

• osi is the active OS in both states and the processor mode is the same, or the active OS is different from osi in both states;
• osi has the same hypercall in both states, or no hypercall in both states;
• the current page tables of osi are the same in both states;
• every page table mapping of osi that maps a virtual address to a RW page in one state must map that address to a page with the same content in the other;
• the hypervisor mappings of osi in both states are such that if a given physical address maps to some RW page in one state, it must map to a page with the same content in the other state.

All conditions have a straightforward interpretation in our model. The first condition is formalized by the proposition:

$$(s.active\_os = osi \wedge s'.active\_os = osi \wedge s.aos\_activity = s'.aos\_activity)\ \vee\ (s.active\_os \neq osi \wedge s'.active\_os \neq osi) \qquad (3.15)$$

The following two conditions are captured by the proposition:

$$s.oss[osi] = s'.oss[osi] \qquad (3.16)$$

Finally, the last two conditions are formally represented by the sentence:

$$(os\_equivalent\_PT\_pages\ s\ s'\ osi)\ \wedge\ (os\_equivalent\_hyper\_mapping\ s\ s'\ osi) \qquad (3.17)$$


where os equivalent PT pages and os equivalent hyper mapping are formally defined in http://www.fing.edu.uy/inco/grupos/gsi/proyectos/virtualcert.php. In particular, os equivalent hyper mapping s s' osi holds iff $s \sim^{hyp}_{osi} s'$ and $s' \sim^{hyp}_{osi} s$, where:

$$s_1 \sim^{hyp}_{osi} s_2 \text{ iff } \forall\ (pa : padd)\ (pg : page),\ (\exists\ (ma : madd),\ s_1.hypervisor[osi][pa] = ma \wedge s_1.memory[ma] = pg) \rightarrow is\_RW(pg.page\_content) \rightarrow \exists\ (ma' : madd),\ s_2.hypervisor[osi][pa] = ma' \wedge s_2.memory[ma'] = pg \qquad (3.18)$$

Note that we cannot require that memory contents be the same in both states for them to be osi-equivalent, because on a page pin action the hypervisor can assign an arbitrary (free) machine address to the OS, so we consider osi-equivalence without taking into account the actual values of the machine addresses assigned. In particular, two osi-equivalent states can have different page table memory pages, which contain mappings from virtual addresses to arbitrary machine addresses, but such that the contents of such an arbitrary machine address are the same in both states, if it corresponds to a RW page. This definition bears some similarity with notions of indistinguishable states used for reasoning about non-interference in object-oriented languages [100]. OS isolation states that osi-equivalence is preserved under execution of any action, and is formalized as a "step-consistent" unwinding lemma, see [101].

Lemma 3.3.3.
$$\forall\ (s_1\ s'_1\ s_2\ s'_2 : State)\ (a : Action)\ (osi : os\_ident),\ s_1 \equiv_{osi} s_2 \rightarrow s_1 \overset{a}{\hookrightarrow} s'_1 \rightarrow s_2 \overset{a}{\hookrightarrow} s'_2 \rightarrow s'_1 \equiv_{osi} s'_2 \qquad (3.19)$$

The proof of OS isolation relies on write isolation, on Lemma 3.3.4, and on an isolation lemma for the case where osi is the active OS of both states s1 and s2. The next lemma formalizes a "locally preserves" unwinding lemma in the style of [101], stating that the osi-component of a state is not modified when another operating system is executing.

Lemma 3.3.4.
$$\forall\ (s\ s' : State)\ (a : Action)\ (osi : os\_ident),\ \neg\ os\_action(s, a, osi) \rightarrow s \overset{a}{\hookrightarrow} s' \rightarrow s \equiv_{osi} s' \qquad (3.20)$$

where os action(s, a, osi) holds if, in the state s, osi is the active and running OS and is therefore executing action a, or otherwise the hypervisor is executing the action a on behalf of osi.

Extensions to traces

All isolation properties extend to traces, using coinductive reasoning principles. In particular, the extension of OS isolation to traces establishes a non-influence property [34]. Formally, we define for each operating system osi a predicate same os actions stating that two traces have the same set of actions w.r.t. osi: concretely, if for all i the actions in t1[i] and t2[i] are the same os action for osi, or both are arbitrary actions not related to osi, then same os actions(osi, t1, t2) holds.


Lemma 3.3.5.
$$\forall\ (t_1\ t_2 : Trace)\ (osi : os\_ident),\ same\_os\_actions(osi, t_1, t_2) \rightarrow (t_1[0] \equiv_{osi} t_2[0]) \rightarrow \Box(\equiv_{osi}, t_1, t_2) \qquad (3.21)$$

For technical reasons related to the treatment of coinductive definitions in Coq (specifically the need for corecursive definitions to be productive), our formalization of non-influence departs from common definitions of non-interference and non-influence, which rely on a purge function that eliminates the actions that are not related to osi. One can however define an erasure function erase that replaces actions that are not related to osi by silent actions, and prove for all traces t that

$$\Box(\equiv_{osi}, t, erase(osi, t)) \qquad (3.22)$$

3.4 Availability

An essential property of virtualization platforms is that all guest operating systems are given access to the resources they need to proceed with their execution. In this section, we establish a strong fairness property, showing that if the hypervisor only performs chmod actions whenever no hypercall is pending, then no OS blocks indefinitely waiting for its hypercalls to be attended. The assumption on the hypervisor is satisfied by all reasonable implementations of the hypervisor; one possible implementation that would satisfy this restriction is an eager hypervisor which attends hypercalls as soon as it receives them and then chooses an operating system to run next. If this is the case, then when the chmod action is executed, no hypercalls are pending on the whole platform. Formally, the assumption on the hypervisor is modelled by considering a restricted set of execution traces in which the initial state has no hypercall pending, and in which chmod actions can only be performed whenever no hypercall is pending. Then, the strong fairness property states that if the hypervisor returns control to guest operating systems infinitely often, then infinitely often there is no pending hypervisor call.

Lemma 3.4.1.
$$\forall\ (t : Trace),\ \neg\ hcall(t[0]) \rightarrow \Box(chmod\_nohcall, t) \rightarrow \Box\Diamond(\neg\ hyper\_running, t) \rightarrow \Box\Diamond(\neg\ hcall, t) \qquad (3.23)$$

where hcall and chmod nohcall respectively denote that there is a hypercall pending and that chmod actions only arise when no hypercall is pending. The proof of the strong fairness property proceeds by co-induction and relies on showing that $\neg\ hyper\_running(s) \rightarrow \neg\ hcall(s)$ is an invariant of all traces that satisfy the hypotheses of the lemma. Note that our strong fairness property is independent of the scheduler: in particular, the hypothesis $\Box\Diamond(\neg\ hyper\_running, t)$ does not guarantee that each operating system will be able to execute infinitely often. Further restricting the implementation of the hypervisor so as to guarantee that the hypervisor is fair to each guest operating system is left for future work.


3.5 Extension of the model with cache and TLB

In this section, we present an extension of the idealized model of virtualization, introduced in Section 3.2, that features cache and TLB. We show the new formulation of the set of states and of the semantics of the actions (the state transformers). We start by providing some background on cache management, according to [102].

3.5.1 Cache management

In order to reduce the overhead necessary to access memory values, CPUs use faster storage devices called caches to store a copy of the data from main memory. When the processor needs to read a memory value, it fetches the value from the cache if it is present (cache hit), or from memory if it is not (cache miss). In systems with virtual memory, the TLB is analogously used to speed up the translation from virtual to physical memory addresses. In the event of a TLB hit, the physical address corresponding to a given virtual address is obtained directly, instead of searching for it in the process page table (which resides in main memory). The cache may potentially be accessed either with a physical address (physically indexed cache) or with a virtual address (virtually indexed cache). There are, in turn, two variants of virtually indexed caches: those where the entries are tagged with the virtual address (virtually indexed, virtually tagged or VIVT cache) and those which are tagged with the corresponding physical address (virtually indexed, physically tagged or VIPT cache). There exist several alternative policies for implementing cache content management, in particular concerning the update and replacement of cache information. A replacement policy specifies the criteria used to remove a value when the cache is full and a new value needs to be stored. Among the most common policies we find those that specify that the value to be removed is either the least recently used value (LRU), the most recently used (MRU) or the least frequently used (LFU). In our model we specify an abstract replacement policy which can be refined to any of the three policies just described. A write policy specifies how the modification of a cache entry impacts the memory structures: a write-through policy, for instance, requires that values are written both in the cache and in the main memory. A write-back policy, on the other hand, requires that values are only modified in the cache and marked dirty, and updates to main memory are performed when a dirty entry is removed from the cache. In this work we provide a model of a VIVT cache and TLB (as in Xen on ARM [103]), for a write-through policy.

3.5.2 States

Figure 3.3 shows a diagram of the extended memory model of the platform. It involves the three types of addresses already presented. The figure also shows the cache and the TLB. The cache is indexed with virtual addresses and holds a (partial) copy of the memory pages. The TLB is used in conjunction with the current page table of the active OS to map virtual to machine addresses. The type of states of the platform is defined as an extension of the set State of Section 3.2.2, including two additional components that model the cache and the TLB. Formally,

Figure 3.3: Memory model of the platform with cache and TLB

$$State \overset{def}{=} [[\ oss : oss\_map,\ active\_os : os\_ident,\ mode : exec\_mode,\ activity : os\_activity,\ hypervisor : hypervisor\_map,\ memory : machine\_memory,\ cache : cache\_vivt,\ tlb : tlb\_struct\ ]] \qquad (3.24)$$

Thus, the remaining two components of the state, the cache and the tlb, are modelled as partial maps whose domains are bounded in size by the positive fixed constants size cache and size tlb:

$$cache\_vivt \overset{def}{=} vadd \mapsto_{size\_cache} page \qquad\quad tlb\_struct \overset{def}{=} vadd \mapsto_{size\_tlb} madd \qquad (3.25)$$

The notion of valid state is formalized by a predicate valid state that captures essential properties of the platform and extends the one presented in Section 3.2.2. The following are examples of properties verified by valid states that relate to cache and TLB: i) the size of the cache and TLB mappings does not exceed their bound; ii) if pg is associated to va in the data cache, then va must be translated to some machine address ma in the active memory mapping, and s.memory[ma] = pg; iii) if va is translated into ma according to the TLB, then the machine address ma is associated to va in the active memory mapping; where the active memory mapping is the current memory mapping of the active OS. Notice that these conditions are necessary to ensure the main memory has been updated according to the write-through policy.
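A Coq rendering of the cache- and TLB-related validity conditions could look as follows. This is a sketch with all lookups kept abstract; the names cache_lookup, tlb_lookup, active_mapping and memory_lookup are illustrative and do not belong to the actual development.

Parameter State vadd madd page : Type.
Parameter cache_lookup   : State -> vadd -> option page.
Parameter tlb_lookup     : State -> vadd -> option madd.
Parameter active_mapping : State -> vadd -> option madd.  (* current mapping of the active OS *)
Parameter memory_lookup  : State -> madd -> option page.

(* Condition ii): every cached page agrees with main memory through the active mapping. *)
Definition cache_coherent (s : State) : Prop :=
  forall (va : vadd) (pg : page),
    cache_lookup s va = Some pg ->
    exists ma : madd,
      active_mapping s va = Some ma /\ memory_lookup s ma = Some pg.

(* Condition iii): every TLB translation agrees with the active mapping. *)
Definition tlb_coherent (s : State) : Prop :=
  forall (va : vadd) (ma : madd),
    tlb_lookup s va = Some ma -> active_mapping s va = Some ma.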


3.5.3 Actions semantics

As in Section 3.2.3, the behaviour of actions in the extended model is specified by a precondition Pre and by a postcondition Post. For instance, Figure 3.4 provides the axiomatic semantics of the write action.

Pre s (write va val) ≝
  ∃ (ma : madd), va_mapped_to_ma(s, va, ma) ∧
  is_RW(s.memory[ma].page_content) ∧
  s.activity = running ∧
  os_accessible(va)

Post s (write va val) s' ≝
  ∃ (ma : madd), va_mapped_to_ma(s, va, ma) ∧
  let pg := s.memory[ma] in
  let new_pg := ⟨RW (Some val), pg.page_owned_by⟩ in
  s' = s · [ memory := s.memory[ma := new_pg],
             cache  := cache_add(fix_cache_syn(s, s.cache, ma), va, new_pg),
             tlb    := tlb_add(s.tlb, va, ma) ]

Figure 3.4: Axiomatic specification of action write

The precondition of the action write va val says that there exists a machine address ma such that va is associated to it (va mapped to ma) and that the page associated to it in the memory is readable/writable (is RW); that the guest OS activity must be running; and that va must be accessible by the active guest OS (os accessible). Its postcondition establishes that the only variations in the state after executing this action can be produced in the value of the page associated to ma in memory, and in the values stored in the cache and the TLB. It is not hard to see that, as the cache uses a write-through policy, both the memory and the cache are updated when a write is performed. A cache c2 is the result of updating a cache c1 with a pair va and pg, written c2 = cache add(c1, va, pg), iff

$$pg = c_2[va]\ \wedge\ \forall\ (va' : vadd)\ (pg' : page),\ va \neq va' \rightarrow pg' = c_2[va'] \rightarrow pg' = c_1[va'] \qquad (3.26)$$

The definition of c2 = tlb add(c1, va, ma) is analogous. Moreover, in order to avoid aliasing problems we fix synonyms before adding a new entry into the cache, using the function fix cache syn. The result of fix cache syn(s, c1, ma) is a cache c2 whose indexes (virtual addresses) are translated to machine addresses ma' which differ from ma in s. We recall that we are modeling a VIVT cache.
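The relational specification (3.26) of cache updates can be written in Coq roughly as below. This is only a sketch: caches are kept abstract and queried through a hypothetical lookup function, so all names used here are illustrative.

Parameter vadd page cache_vivt : Type.
Parameter cache_lookup : cache_vivt -> vadd -> option page.

(* cache_add_spec c1 c2 va pg captures (3.26): va is now mapped to pg, and any
   other entry present in c2 was already present in c1 (entries may have been
   evicted, but none are invented). *)
Definition cache_add_spec (c1 c2 : cache_vivt) (va : vadd) (pg : page) : Prop :=
  cache_lookup c2 va = Some pg /\
  forall (va' : vadd) (pg' : page),
    va <> va' -> cache_lookup c2 va' = Some pg' -> cache_lookup c1 va' = Some pg'.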

Action write va val:
  s.aos_activity ≠ running: wrong_os_activity
  ¬ va_mapped_to_ma(s, va, ma): invalid_vadd
  ¬ os_accessible(va): no_access_va_os
  ¬ is_RW(s.memory[ma].page_content): wrong_page_type

Action new_tr va pa:
  s.aos_activity ≠ running: wrong_os_activity
  ¬ os_accessible(va): no_access_va_os
  ¬ trusted_os(osi): os_trust_failure
  ¬ page_of_OS(s.active_os, pa, ma): wrong_owner

Action lswitch_untr osi pa:
  s.aos_activity ≠ waiting: wrong_os_activity
  trusted_os(osi): os_trust_failure
  ¬ is_PT(s.memory[ma].page_content): wrong_page_type
  ¬ lswitch_hypercall(s.oss[osi].hcall): wrong_pending_hcall
  ¬ page_of_OS(s.active_os, pa, ma): wrong_owner

Action page_unpin_untr osi pa:
  s.aos_activity ≠ waiting: wrong_os_activity
  trusted_os(osi): os_trust_failure
  ¬ page_unpin_hypercall(s.oss[osi].hcall): wrong_pending_hcall
  ¬ pa_not_curr_page(s, s.oss, pa): wrong_curr_page_add
  s.hypervisor[osi][pa] ≠ ma: invalid_madd
  ¬ no_va_mapped_to_ma(s, osi, ma): invalid_vadd

Table 3.2: Preconditions and error codes

3.6 Verified implementation

In this section we present an extension of the model with execution errors. We describe the executable specification and show that it constitutes a correct implementation of the behavior specified by the idealized model, with cache and TLB.

3.6.1 Error management

There can be attempts to execute an action on a state that does not satisfy the precondition of that action. In such a situation, the system answers with a corresponding error code. These error codes are defined in our model by the enumerated type ErrorCode. We define the relation between an error code and the unfulfilled precondition of an action with the predicate ErrorMsg. Formally,

$$ErrorMsg : State \rightarrow Action \rightarrow ErrorCode \rightarrow Prop \qquad (3.27)$$

where ErrorMsg s a ec means that the execution of the action a in the state s generates the error ec. In Table 3.2 we show some examples of error codes associated with unsatisfied preconditions of some actions of our model. Notice that, in the case of the write action for instance, to each of the propositions that compose the precondition of that action there corresponds an element of ErrorCode indicating the failure of the state s to satisfy that proposition.

Executions with error management

Executing an action a over a state s produces a new state s' and a corresponding answer r (denoted $s \overset{a/r}{\hookrightarrow} s'$), where the relation between the former state and the new one is given by the postcondition relation Post.


$$\frac{valid\_state(s) \qquad Pre(s, a) \qquad Post(s, a, s')}{s \overset{a/ok}{\hookrightarrow} s'} \qquad\qquad \frac{valid\_state(s) \qquad ErrorMsg(s, a, ec)}{s \overset{a/error\ ec}{\hookrightarrow} s} \qquad (3.28)$$

Whenever an action occurs for which the precondition holds, the (valid) state may change in such a way that the action postcondition is established. The notation $s \overset{a/ok}{\hookrightarrow} s'$ may be read as: the execution of the action a in a valid state s results in a new state s'. However, if the precondition is not satisfied, then the state s remains unchanged and the system answer is the error message determined by the relation ErrorMsg. Formally, the possible answers of the system are defined by the following type:

$$Response \overset{def}{=}\ ok : Response\ \mid\ error : ErrorCode \rightarrow Response \qquad (3.29)$$

where ok is the answer resulting from a successful execution of an action. One-step execution with error management preserves valid states, that is to say, the state resulting from the execution of an action is also a valid one.

Lemma 3.6.1.
$$\forall\ (s\ s' : State)\ (a : Action)\ (r : Response),\ valid\_state(s) \rightarrow s \overset{a/r}{\hookrightarrow} s' \rightarrow valid\_state(s') \qquad (3.30)$$
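In Coq, the execution relation with error management used in Lemma 3.6.1 can be sketched as follows; the declarations are abstract and illustrative, not the actual development.

Parameter State Action ErrorCode : Type.
Parameter valid_state : State -> Prop.
Parameter Pre  : State -> Action -> Prop.
Parameter Post : State -> Action -> State -> Prop.
Parameter ErrorMsg : State -> Action -> ErrorCode -> Prop.

Inductive Response : Type :=
| ok    : Response
| error : ErrorCode -> Response.

(* exec_r s a r s' mirrors the two rules of (3.28): a successful step satisfies
   the postcondition, whereas a failing step leaves the state unchanged. *)
Inductive exec_r : State -> Action -> Response -> State -> Prop :=
| exec_ok  : forall s a s',
    valid_state s -> Pre s a -> Post s a s' -> exec_r s a ok s'
| exec_err : forall s a ec,
    valid_state s -> ErrorMsg s a ec -> exec_r s a (error ec) s.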

3.6.2 Executable specification

The executable specification of the hypervisor has been written using the Coq proof assistant and ultimately amounts to the definition of functions that implement action execution. The functions have been defined so as to conform to the axiomatic specification of action execution as provided by the idealized model. The implementation of the hypervisor consists of a set of Coq functions, such that for every predicate involved in the axiomatic specification of action execution there exists a function which stands for the functional counterpart of that predicate. An important characteristic of our formalization is that the definition of state used for defining the executable semantics of the hypervisor is exactly the same as the one introduced in the idealized model. This simplifies the formal proof of soundness between the inductive and the functional semantics of the hypervisor. The execution of the virtualization platform consists of a (potentially infinite) sequence of action executions starting in an (initial) platform state. The output of the execution is the corresponding sequence of memory states (the trace of execution) obtained while executing the sequence of actions.

Action execution

The execution of actions has been implemented as a step function that, given a memory state s and an action a, invokes the function that implements the execution of a in s, which in turn returns an object res of type Result:


$$Result \overset{def}{=} [[\ resp : Response,\ st : State\ ]] \qquad (3.31)$$

where res.resp is either an error code ec, if the precondition of the action does not hold in state s, or otherwise the value ok, and the state res.st represents the effect of the execution. The step function acts basically as an action dispatcher. Figure 3.5, which shows the structure of the dispatcher, details the branch corresponding to the dispatching of the action write, which is the action we shall use along this section to illustrate the working of the implementation.

Definition step s a :=
  match a with
  | ...          ⇒ ...
  | Write va val ⇒ write_safe(s, va, val)
  | ...          ⇒ ...
  end.

Figure 3.5: The step function

The functions invoked in the branches, like write_safe, are state transformers whose definition follows this pattern: first it is checked whether the precondition of the action is satisfied in state s; if that is the case, the function that implements the execution of the action is invoked; otherwise, the state s, unchanged, is returned along with an appropriate response. In Figure 3.6 we show the definition of the function that implements the execution of the write action. The Coq code of this function, together with that of the remaining functions, can be found in [104].

Definition write_safe (s : state) (va : vadd) (val : value) : Result :=
  match write_pre(s, va, val) with
  | Some ec ⇒ ⟨error(ec), s⟩
  | None    ⇒ ⟨ok, write_post(s, va, val)⟩
  end.

Figure 3.6: Execution of write action

The function write_pre is defined as the nested validation of each of the properties of the precondition (see Figure 3.7). The function write_post, shown in Figure 3.8, implements the expected behavior of the write action: when a new value has to be written at a certain virtual address va, first it must be checked whether va is in the cache (i.e. whether it is an index of the cache). If that is the case, then the function updates both the cache and the memory, because it implements a write-through policy. Otherwise, i.e. if the virtual address va is not already in the cache, the machine address associated to va has to be determined in order to write the new value in memory. First, the TLB is inspected to check whether va has already been translated. If there is a translation of va in the TLB, then that machine address is used to update the memory and the new entry ⟨va, new pg⟩ is added to the cache. If there is no translation of va in the TLB, then the corresponding machine address has to be recovered using the current page table of the active guest OS.


Definition write_pre (s : state) (va : vadd) (val : value) : option ErrorCode :=
  match get_os_ma(s, va) with
  | None    ⇒ Some invalid_vadd
  | Some ma ⇒
      match page_type(s.memory, ma) with
      | Some RW ⇒
          match aos_activity(s) with
          | Waiting ⇒ Some wrong_os_activity
          | Running ⇒ if vadd_accessible(s, va) then None
                      else Some no_access_va_os
          end
      | _ ⇒ Some wrong_page_type
      end
  end.

Figure 3.7: Validation of write action precondition

Once that translation has been found, the memory is updated, the new entry ⟨va, new pg⟩ is added to the cache, and the corresponding translation of va is added to the TLB.

Cache and TLB update

In the axiomatic semantics of cache and TLB management the replacement policy has been left abstract. For the execution semantics we have chosen to implement a simple FIFO replacement mechanism. However, this behavior is encapsulated in the definitions of the functions f cache add and f tlb add, which implement cache and TLB replacement, respectively. Therefore, to implement an alternative replacement policy it suffices to modify these two functions accordingly, leaving the rest of the code unchanged. Figure 3.9 shows the definition of the f cache add function: first, it is checked whether the virtual address va is the index of an entry of the cache c (map valid index). If this is the case, it suffices to perform a simple update of c with the page pg (caches are implemented as bounded maps of virtual addresses to pages). Otherwise, the behaviour of the function depends on whether c has room for a new entry or is full (is full cache). If c is full, the cache update, and entry eviction, is handled using the FIFO replacement algorithm (fifo replace). If there is room left for a new entry, then c is updated following the FIFO replacement algorithm guidelines for adding new entries to the cache (fifo add). The definitions of the replacement and update functions for the TLB are analogous.
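To make the FIFO behaviour concrete, the following Coq sketch implements a bounded FIFO cache as an association list ordered from oldest to newest entry. It is only an illustration under that simplifying representation; the actual development represents caches as bounded partial maps, and the names fifo_add, fifo_replace and is_full_cache merely mirror those used in Figure 3.9.

Require Import List Arith.
Import ListNotations.

Section FifoCache.
  Variables K V : Type.
  Variable size_cache : nat.

  (* Association lists ordered from oldest (head) to newest (last) entry. *)
  Definition fcache := list (K * V).

  Definition is_full_cache (c : fcache) : bool :=
    Nat.leb size_cache (length c).

  (* Room left: append the new entry at the newest end. *)
  Definition fifo_add (c : fcache) (k : K) (v : V) : fcache :=
    c ++ [(k, v)].

  (* Cache full: evict the oldest entry (the head) and append the new one. *)
  Definition fifo_replace (c : fcache) (k : K) (v : V) : fcache :=
    match c with
    | []        => [(k, v)]
    | _ :: rest => rest ++ [(k, v)]
    end.
End FifoCache.

Replacing the FIFO policy by, say, LRU would only require changing these two functions, which is precisely the modularity argument made above.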


3.6.3 Soundness

We proceed now to outline the proof that the executable specification of the hypervisor correctly implements the axiomatic model. It has been formally stated as a soundness theorem and verified using the Coq proof assistant.

Theorem 3.6.2.
$$\forall\ (s : State)\ (a : Action),\ valid\_state(s) \rightarrow s \overset{a/step(s,a).resp}{\hookrightarrow} step(s,a).st \qquad (3.32)$$
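In Coq, the statement of the theorem can be phrased roughly as below. The sketch declares all components abstractly (exec_r stands for the one-step execution relation with error management), so it only illustrates the shape of the statement, not the actual development.

Parameter State Action ErrorCode : Type.
Parameter valid_state : State -> Prop.

Inductive Response : Type :=
| ok    : Response
| error : ErrorCode -> Response.

Record Result : Type := mkResult { resp : Response; st : State }.

Parameter exec_r : State -> Action -> Response -> State -> Prop.
Parameter step   : State -> Action -> Result.

Theorem step_soundness :
  forall (s : State) (a : Action),
    valid_state s -> exec_r s a (resp (step s a)) (st (step s a)).
Admitted.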

The proof of this theorem proceeds by first performing a case analysis on Pre(s, a) (this predicate is decidable), and then applying Lemma 3.6.3 if Pre(s, a) holds, or Lemma 3.6.5 otherwise.

Lemma 3.6.3.
$$\forall\ (s : State)\ (a : Action),\ valid\_state(s) \rightarrow Pre(s, a) \rightarrow s \overset{a/ok}{\hookrightarrow} step(s,a).st\ \wedge\ step(s,a).resp = ok \qquad (3.33)$$

The proof of Lemma 3.6.3 proceeds by applying functional induction on step(s, a) and then by providing the corresponding proof of soundness of the function that implements the execution of each action. Thus, in the case of the action write we have stated and proved Lemma 3.6.4. This lemma, in turn, follows by performing a case analysis on the result of applying the function write pre to s and the action: if the result is an error code, then the thesis follows by contradiction. Otherwise, it follows from the correctness of the function write post.

Lemma 3.6.4.
$$\forall\ (s : State)\ (va : vadd)\ (val : value),\ valid\_state(s) \rightarrow Pre(s, (write\ va\ val)) \rightarrow Post(s, (write\ va\ val), write\_post(s, va, val)) \qquad (3.34)$$

As to Lemma 3.6.5, the proof also proceeds by first applying functional induction on step(s, a). Then, for each action a, it is shown that if ¬Pre(s, a) holds, the execution of the function that implements that action yields the values returned by the branch corresponding to the case in which the function that validates the precondition of the action a in state s fails, i.e. an error code ec and the (unchanged) state s.

Lemma 3.6.5.
$$\forall\ (s : State)\ (a : Action),\ valid\_state(s) \rightarrow \neg Pre(s, a) \rightarrow \exists\ (ec : ErrorCode),\ step(s, a).st = s\ \wedge\ step(s, a).resp = error(ec)\ \wedge\ ErrorMsg(s, a, ec) \qquad (3.35)$$


Definition write_post (s : state) (va : vadd) (val : value) : state :=
  match s.cache[va] with
  | Value old_pg ⇒
      let new_pg := Page (RW_c (Some val)) (page_owned_by old_pg) in
      let val_ma := va_mapped_to_ma_system(s, va) in
      match val_ma with
      | Value ma ⇒ s · [ mem   := s.memory[ma := new_pg],
                         cache := f_cache_add(fix_cache_syn(s, s.cache, ma), va, new_pg) ]
      | Error    ⇒ s
      end
  | Error ⇒
      match s.tlb[va] with
      | Value ma ⇒
          match s.memory[ma] with
          | Value old_pg ⇒
              let new_pg := Page (RW_c (Some val)) (page_owned_by old_pg) in
              s · [ mem   := s.memory[ma := new_pg],
                    cache := f_cache_add(fix_cache_syn(s, s.cache, ma), va, new_pg) ]
          | Error ⇒ s
          end
      | Error ⇒
          match va_mapped_to_ma_currentPT(s, va) with
          | Value ma ⇒
              match s.memory[ma] with
              | Value old_pg ⇒
                  let new_pg := Page (RW_c (Some val)) (page_owned_by old_pg) in
                  s · [ mem   := s.memory[ma := new_pg],
                        cache := f_cache_add(fix_cache_syn(s, s.cache, ma), va, new_pg),
                        tlb   := f_tlb_add(s.tlb, va, ma) ]
              | Error ⇒ s
              end
          | Error ⇒ s
          end
      end
  end.

Figure 3.8: Effect of write execution

Definition f_cache_add (c : cache_vivt) (va : vadd) (pg : page) : cache_vivt :=
  if map_valid_index(c, va) then map_add(c, va, pg)
  else if is_full_cache(c) then fifo_replace(c, va, pg)
  else fifo_add(c, va, pg).

Figure 3.9: Cache update

3.7 Isolation and transparency in the extended model

Isolation theorems ensure that the virtualization platform protects guest operating systems against each other, in the sense that a malicious operating system cannot gain information about another, victim, operating system executing on the same platform. In this section we extend the proof of OS isolation from Section 3.3, modifying some key technical definitions and lemmas, so that it accounts for errors in execution traces for the model extended with cache and TLB. Finally, we present a machine-checked proof of transparency. Transparency states that the virtualization platform is a correct abstraction of the underlying hardware, in the sense that a guest operating system cannot distinguish whether it executes alone or together with other systems. Transparency is a 2-safety property; its formulation involves an erasure function which removes all the components of the states that do not belong to some fixed operating system. We define an appropriate erasure, establish its fundamental properties, and finally derive transparency.

3.7.1 OS isolation

We have mechanically verified three isolation properties:

1. read isolation: no OS can read memory that does not belong to it;
2. write isolation: an OS cannot modify memory that it does not own;
3. OS isolation: the behavior of an OS does not depend on the states of other OSs.

Read and write isolation are safety properties, whereas OS isolation is a 2-property, i.e. it is cast in terms of two executions of the system. We focus on OS isolation, which is by far the most challenging property. Informally, as presented in Section 3.3, OS isolation states that, starting from states with the same information for an operating system osi, osi cannot distinguish between the two traces, as long as it executes the same actions in both. This captures the idea that the execution of osi does not depend on the state or behaviour of the other systems, even in the presence of erroneous executions. Note that there is one particular error (the out of memory error in [104]) that can in principle influence the execution of an operating system, if during its execution the platform runs out of memory. Since we are specifically interested in modelling observations on states (and on the cache, in particular), we treat this error as transparent for the executing operating system, and only make sure it does not modify the state. This is consistent with what usually happens in real implementations, where there are no data leaks from the victims when the platform runs out of memory, and the only information an attacker learns is the total memory consumption of the other operating systems in the platform. Additionally, it is possible in this case to assign to each guest OS a fixed pool of memory from which to allocate, so that whether allocation succeeds or fails for one OS does not depend on what any other guest OS does. To formalize OS isolation we use a notion of state equivalence w.r.t. an operating system osi. The definition of osi-equivalence ($\equiv_{osi}$) coincides with the one used in [94] and presented in Section 3.3; in particular, it does not mention the cache and the TLB. However, one can prove that it entails some form of cache equivalence and TLB equivalence on valid states.


Formally, we define two valid states s1 and s2 to be cache equivalent for osi, written $s_1 \equiv^{cache}_{osi} s_2$, iff either osi is not active in both states, or it is active and the caches hold equal values for all accessible virtual addresses va that are in the domain of the cache of both states, i.e. for all virtual addresses va and pages p1 and p2:

$$os\_accessible(va) \rightarrow s_1.cache[va] = p_1 \rightarrow s_2.cache[va] = p_2 \rightarrow p_1 = p_2 \qquad (3.36)$$

Note that we do not require that the domains of both caches coincide, as that would invalidate the following lemma.

Lemma 3.7.1.
$$\forall\ (s_1\ s_2 : State)\ (osi : os\_ident),\ valid\_state(s_1) \rightarrow valid\_state(s_2) \rightarrow s_1 \equiv_{osi} s_2 \rightarrow s_1 \equiv^{cache}_{osi} s_2 \qquad (3.37)$$

The notion of TLB equivalence is defined in a similar way. We say that two valid states s1 and s2 are TLB equivalent for osi, written $s_1 \equiv^{tlb}_{osi} s_2$, iff either osi is not active in both states, or it is active and for all accessible virtual addresses va that are in the domain of the TLB of both states, if the machine address s1.tlb[va] holds a page with RW memory content, then if va appears in s2.tlb, it holds the same page, i.e. for all machine addresses ma1 and ma2, and page pg:

$$s_1.tlb[va] = ma_1 \rightarrow s_1.memory[ma_1] = pg \rightarrow is\_RW(pg.page\_content) \rightarrow s_2.tlb[va] = ma_2 \rightarrow s_2.memory[ma_2] = pg \qquad (3.38)$$

and conversely. We have:

Lemma 3.7.2.
$$\forall\ (s_1\ s_2 : State)\ (osi : os\_ident),\ valid\_state(s_1) \rightarrow valid\_state(s_2) \rightarrow s_1 \equiv_{osi} s_2 \rightarrow s_1 \equiv^{tlb}_{osi} s_2 \qquad (3.39)$$

We write $s_1 \equiv^{cache,tlb}_{osi} s_2$ as a shorthand for $s_1 \equiv_{osi} s_2 \wedge s_1 \equiv^{cache}_{osi} s_2 \wedge s_1 \equiv^{tlb}_{osi} s_2$. We can now generalize the unwinding lemmas of Section 3.3. The first lemma states that equivalence is preserved by the execution of all actions that do not generate errors.

Lemma 3.7.3.
$$\forall\ (s_1\ s'_1\ s_2\ s'_2 : State)\ (a : Action)\ (osi : os\_ident),\ s_1 \equiv_{osi} s_2 \rightarrow os\_action(s_1, a, osi) \rightarrow os\_action(s_2, a, osi) \rightarrow s_1 \overset{a/ok}{\hookrightarrow} s'_1 \rightarrow s_2 \overset{a/ok}{\hookrightarrow} s'_2 \rightarrow s'_1 \equiv^{cache,tlb}_{osi} s'_2 \qquad (3.40)$$

where os action(s, a, osi) denotes that action a is an action successfully executed by the OS osi in the state s; in particular, its execution does not cause an error. Note that an execution that fails does not produce a change in the system state. The second lemma states that execution does not alter the state of non-active OSs, nor that of the active OS if it performs an execution that fails.

Lemma 3.7.4.
$$\forall\ (s\ s' : State)\ (a : Action)\ (r : Response)\ (osi : os\_ident),\ \neg\ os\_action(s, a, osi) \rightarrow s \overset{a/r}{\hookrightarrow} s' \rightarrow s \equiv^{cache,tlb}_{osi} s' \qquad (3.41)$$

3.7.2 OS isolation in execution traces

The extension to traces of the one-step execution relation with error management is defined as follows: an execution trace is a stream (an infinite list) of states related by the transition relation $\overset{a/r}{\hookrightarrow}$, i.e. an object of the form

$$s_0 \overset{a_0/r_0}{\hookrightarrow} s_1 \overset{a_1/r_1}{\hookrightarrow} s_2 \overset{a_2/r_2}{\hookrightarrow} s_3 \ldots \qquad (3.42)$$

In the sequel, we use $s \overset{a/r}{\hookrightarrow} t$ to denote the trace obtained by prepending the valid execution step $s \overset{a/r}{\hookrightarrow} t[0]$ to a trace t. We let Trace denote the type of these traces. Isolation properties are eventually expressed on execution traces, rather than on execution steps.

Non-influencing execution (errors)

Using the unwinding lemmas previously presented, one can establish a non-influence result in the style of [34]. We define for each operating system osi a predicate same os actions stating that two traces have the same set of actions w.r.t. osi, so that two traces are related iff they perform the same valid osi-actions. Then we define two traces t1 and t2 to be osi-equivalent, written $t_1 \approx_{osi,cache,tlb} t_2$, co-inductively by the following rules:

$$\frac{t_1 \approx_{osi,cache,tlb} t_2 \qquad \neg\ os\_action(s, a, osi)}{(s \overset{a/r}{\hookrightarrow} t_1) \approx_{osi,cache,tlb} t_2}$$

$$\frac{t_1 \approx_{osi,cache,tlb} t_2 \qquad \neg\ os\_action(s, a, osi)}{t_1 \approx_{osi,cache,tlb} (s \overset{a/r}{\hookrightarrow} t_2)}$$

$$\frac{t_1 \approx_{osi,cache,tlb} t_2 \qquad os\_action(s_1, a, osi) \qquad os\_action(s_2, a, osi) \qquad s_1 \equiv^{cache,tlb}_{osi} s_2}{(s_1 \overset{a/ok}{\hookrightarrow} t_1) \approx_{osi,cache,tlb} (s_2 \overset{a/ok}{\hookrightarrow} t_2)} \qquad (3.43)$$
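The co-inductive relation defined by these rules can be sketched in Coq as follows, where traces with error management are simplified to streams of (state, action, response) triples and all declared names are illustrative rather than those of the actual development.

Parameter State Action Response os_ident : Type.
Parameter ok : Response.
Parameter os_action : State -> Action -> os_ident -> Prop.
Parameter equiv_cache_tlb : os_ident -> State -> State -> Prop.

(* tcons s a r t prepends the execution step s -a/r-> t to the trace t. *)
CoInductive ETrace : Type :=
| tcons : State -> Action -> Response -> ETrace -> ETrace.

CoInductive trace_equiv (osi : os_ident) : ETrace -> ETrace -> Prop :=
| te_left  : forall s a r t1 t2,
    trace_equiv osi t1 t2 -> ~ os_action s a osi ->
    trace_equiv osi (tcons s a r t1) t2
| te_right : forall s a r t1 t2,
    trace_equiv osi t1 t2 -> ~ os_action s a osi ->
    trace_equiv osi t1 (tcons s a r t2)
| te_step  : forall s1 s2 a t1 t2,
    trace_equiv osi t1 t2 ->
    os_action s1 a osi -> os_action s2 a osi ->
    equiv_cache_tlb osi s1 s2 ->
    trace_equiv osi (tcons s1 a ok t1) (tcons s2 a ok t2).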

Theorem 3.7.5.
$$\forall\ (t_1\ t_2 : Trace)\ (osi : os\_ident),\ same\_os\_actions(osi, t_1, t_2) \rightarrow (t_1[0] \equiv_{osi} t_2[0]) \rightarrow t_1 \approx_{osi,cache,tlb} t_2 \qquad (3.44)$$

OS isolation formally establishes that two traces are osi-equivalent if they have the same set of osi-actions and if their initial states are osi-equivalent. The proof of OS isolation is based on co-induction principles and on the previous unwinding lemmas. Note that the definition of osi-equivalent traces conveniently generalizes the notion used in Section 3.3, by allowing related traces to differ in the number of actions executed by other OSs, and extends it to executions with error management. In particular, Theorem 3.7.5 states that the OS isolation property introduced in Section 3.3 is also valid in the context of executions that include error handling, considering that an osi-action is an action successfully executed by the OS osi. Though it is left as future work, it is interesting to comment on the validity of the isolation properties under other policies. On the one hand, the replacement policy for the cache and the TLB is left abstract in our model, so any reasonable algorithm will preserve these properties (as embodied e.g. in the definition of cache add in Section 3.5.3).


On the other hand, we have fixed a write-through policy for the main memory: this policy entails that updates to memory pages are done simultaneously in the cache and in main memory, and we have used throughout the development the invariant property that cache data is included in the memory. This inclusion property would not hold if we were to use a write-back policy, in which written entries are marked dirty and updates to main memory are done when a page is removed from the cache. We believe that it remains possible to prove strong isolation properties under the write-back policy, since page values, even if different in memory, will be equal if we consider the cache and memory together. Finally, the flushing policy is assumed to be a total flush on switch and local switch execution. An alternative would be to tag cache (and TLB) entries with the virtual spaces allowed to access the entry. This would not have as much impact on the current model as the write policy, though changes would need to be made to the cache definition to include the tags. Isolation properties would still hold, given correct semantics of access control of cache entries.

3.7.3 Transparency

Functional correctness of a hypervisor is conveniently formalized as a transparency property, stating that, under some specific conditions, a guest operating system cannot distinguish whether it runs on the virtualization platform or operates directly on hardware. In this section, we prove a variant of this property suitable for paravirtualization: namely, a guest OS is unable to distinguish between executing together with other OSs and executing alone on the platform. More precisely, the proved properties establish that, given a guest OS osi, the (concurrent) execution of other guest OSs does not interfere with the execution of osi. Intuitively, our formulation of transparency states that for any operating system osi, any execution trace is osi-equivalent to its osi-erased trace, in which all state components not related to osi have been removed, and all actions not performed by osi, or by the hypervisor on behalf of osi, have been turned into silent actions. This approach is similar to, but somewhat stronger than, OS isolation, as it requires that after erasing all non-osi data the execution is still valid, i.e. that state validity and action semantics are maintained. More precisely, the erasure s\osi of a state s is obtained by removing all state components that do not belong to osi, and it satisfies the following properties: i) osi is the only OS in the erased environment; ii) if osi was not active in s, the activity of the erased state is waiting, otherwise it is not changed; iii) hypervisor mappings in the erased state are only defined for osi and are the same as in s; iv) all memory pages whose owner is not osi (including hypervisor-owned ones) are freed; v) osi RW pages remain unchanged; vi) non-accessible virtual addresses (i.e. those owned by the hypervisor) are removed from all osi PT pages; vii) if osi is active in s, all non-accessible virtual addresses are removed from the cache and the TLB; viii) if osi is not active in s, the cache and the TLB are flushed. Note that if osi is not active, no cache (or TLB) entry will belong to osi, so we erase all of the entries, flushing the cache. If osi is active, cache entries will be owned either by osi or by the hypervisor; in the second case, we erase them by removing all entries corresponding to non-accessible virtual addresses. The result of these transformations is that only data from osi stays in the erased cache in all cases.


Pre s silent ≝ True

Post s silent s' ≝ s' = s · [cache := ⋆, tlb := ⋆] ∧ valid_cache(s') ∧ valid_tlb(s')

Figure 3.10: Formal specification of the action silent

The erasure of a valid state is also valid:

Lemma 3.7.6.
$$\forall\ (s : State),\ valid\_state(s) \rightarrow valid\_state(s\backslash_{osi}) \qquad (3.45)$$

Moreover, the erasure of a state is equivalent to the original state. Since osi is always the active OS in $s\backslash_{osi}$, we must consider a weaker notion, denoted $\equiv^{w}_{osi}$, that is identical to $\equiv_{osi}$ except that it does not require that the two states have the same active OS.

Lemma 3.7.7.
$$\forall\ (s : State),\ valid\_state(s) \rightarrow s \equiv^{w}_{osi} s\backslash_{osi} \qquad (3.46)$$

Next, erasure must be extended to actions. We silence all actions that are executed by OSs different from osi, and by the hypervisor for itself or for one of those other OSs. All other actions (those executed directly by osi or by the hypervisor on behalf of osi) remain unchanged. We use the notation $a\backslash_{osi}$ for the erasure of an action. Erasure preserves valid step executions.

Lemma 3.7.8.
$$\forall\ (s\ s' : State)\ (a : Action),\ s \overset{a}{\hookrightarrow} s' \rightarrow s\backslash_{osi} \overset{a\backslash_{osi}}{\hookrightarrow} s'\backslash_{osi} \qquad (3.47)$$

The proof of the lemma hinges upon the "loose" semantics of the silent action. Informally, the semantics allows the cache and the TLB to be different in the pre and post states, only requiring that both are still valid, according to the explanation provided in Section 3.5.2. The formal axiomatic semantics is presented in Figure 3.10; for the sake of readability, we use the non-deterministic field update r · [f := ⋆] in the rule, whereas the Coq formalization introduces existential quantification instead. As the erasure of the cache removes every hypervisor entry, a full cache in s will not necessarily be full in s\osi. Moreover, a silenced action that adds an entry to the cache in the original execution (a read hyper action, for instance) will remove another entry to make space. In the erased execution, the initial state will not be full, but after the effects of silent, the resulting state will have fewer entries in the cache. This is not an issue, however, because the valid state invariance and Lemma 3.7.6 imply that the caches in both states will be consistent: all memory accesses by osi will obtain the same data, either from the cache or from main memory. Lemmas 3.7.6 and 3.7.8 entail that the erasure of a trace is another trace. The desired transparency property can then be expressed as follows: given an osi and any valid trace of the system, the erasure of the trace is another trace, osi-equivalent to the original one. Formally, weak equivalence $\approx^{w}_{osi}$ on traces is defined as a straightforward generalization of weak equivalence $\equiv^{w}_{osi}$ on states.


Theorem 3.7.9.
$$\forall\ (osi : os\_ident)\ (t : Trace),\ t \approx^{w}_{osi} t\backslash_{osi} \qquad (3.48)$$

3.8 Related work

Thanks to recent advances in verification technology, it is now becoming feasible to formally verify realistic specifications and implementations of operating systems. A recent account of existing efforts can be found in the surveys [105, 106]. The Microsoft Hyper-V verification project focuses on proving the functional correctness of the deployed implementation of the Hyper-V hypervisor [14, 15] or of a simplified ("baby") implementation [107]. Using VCC, an automated verifier for annotated C code, these works aim to prove that the hypervisor correctly simulates the execution of the guest operating systems, in the sense that the latter cannot observe any difference from executing on their own on a standard platform. At a more specific level, these works provide a detailed account of many components that are not considered in our work, including page tables [108] and devices [109]. The L4.verified project [16] focuses on proving the functional correctness of an implementation of seL4, a microkernel whose main application is as a hypervisor running paravirtualized Linux. The implementation consists of approximately 9kLOC of C and 600 lines of assembler, and has been shown to be a valid refinement of a very detailed abstract model that considers, for example, page tables and I/O devices. Subsequent machine-checked developments prove that seL4 enforces integrity, authority confinement [32] and intransitive non-interference [110, 33]. The formalization does not model the cache. Recently, the Verve project [111] has initiated the development of a new operating system whose type safety and memory safety have been verified using a combination of type systems and Hoare logic. Outside these projects, several projects have implemented small hypervisors, to reduce the Trusted Computing Base, or with formal verification in mind [112], but we are not aware of any completed proof of functional correctness or security. Our work is also related to formal verification of isolation properties for separation kernels. Earlier works on separation kernels [113, 114, 115] formalize a simpler model where memory is partitioned a priori. In contrast, our model allows the partition to evolve, comprises three types of addresses, and is close to that of virtualization platforms, where memory requested by the OSs is dynamically allocated from a common memory pool. Dealing with this kind of memory management adds significant complications to isolation proofs. Our work is also inspired by earlier efforts to prove isolation for smartcard platforms. Andronick, Chetali and Ly [116] use the Coq proof assistant to establish that the JavaCard firewall mechanism ensures isolation properties between contexts, i.e. sets of applications that trust each other. Oheimb and co-workers [34, 99] independently verify isolation properties for the Infineon SLE 88 using the Isabelle proof assistant. In particular, their work formalizes a notion of non-influence that is closely related to our isolation properties. Moving away from OS verification, many works have addressed the problem of relating inductively defined relations and executable functions, in particular in the context of programming language semantics.

For instance, Tollitte et al. [117] show how to extract a functional implementation from an inductive specification in the Coq proof assistant. Similar approaches exist for Isabelle, see e.g. [118]. Earlier, alternative approaches such as [119, 120] aim to provide reasoning principles for executable specifications.

3.9 Summary

Hypervisors allow multiple guest operating systems to run on shared hardware, and offer a compelling means of improving the security and the flexibility of software systems. In this chapter we have formalized in the Coq proof assistant an idealized model of a hypervisor, and we have established (formally) that the hypervisor ensures strong isolation properties between the different operating systems, and guarantees that requests from guest operating systems are eventually attended. In contrast to most prominent projects on operating systems verification, where such guarantees are proved directly on concrete implementations of hypervisors, the machine-checked model of virtualization presented in this work abstracts away most implementation issues and specifies the effects of hypervisor actions axiomatically, in terms of preconditions and postconditions. Unfortunately, seemingly innocuous implementation issues are often relevant for security. Incorporating the treatment of errors into the model has therefore been an important step towards strengthening the isolation theorems proved initially. In this chapter, we have also extended our earlier model with errors, for a memory system with cache and TLB, and we have proved that the isolation theorems still apply. In addition, we have provided an executable specification of the hypervisor, and we have proved that it correctly implements the axiomatic model. The executable specification constitutes a first step towards a more realistic implementation of a hypervisor, and provides a useful tool for validating the axiomatic semantics developed. In [104] we derive two certified hypervisor implementations, using the extraction mechanism of Coq, in the functional languages Haskell and OCaml. Finally, we have shown that virtualized platforms are transparent, i.e. a guest operating system cannot distinguish whether it executes alone or together with other guest operating systems on the platform.


Chapter 4

A formal specification of the DNSSEC model

4.1 Introduction

The Domain Name System (DNS) [35, 36] constitutes a distributed database that provides support to a wide variety of network applications. The database is indexed by domain names. A domain name represents a path in a hierarchical tree structure, which in turn constitutes a domain name space. Each node of this tree is assigned a label; thus, a domain name is built as a sequence of labels separated by a dot, from a particular node up to the root of the tree. A distinguishing feature of the design of DNS is that the administration of the system can be distributed among several (authoritative) name servers. A zone is a contiguous part of the domain name space that is managed by a set of authoritative name servers. Then, distribution is achieved by delegating part of the administration of a zone to a set of delegated sub-zones. DNS is a widely used scalable system, but it was not conceived with security concerns in mind, as it was designed to be a public database with no intention of restricting access to information. Nowadays, a large number of distributed applications make use of domain names. Confidence in the working of those applications depends critically on the use of trusted data: fake information inside the system has been shown to lead to unexpected and potentially dangerous problems. Already in the early 90's, serious security flaws were discovered by Bellovin and eventually reported in [37]. Different types of security issues concerning the working of DNS have been discussed in the literature [38, 37, 39, 40, 41, 42]. Identified vulnerabilities of DNS make it possible to launch different kinds of attacks, such as cache poisoning [45, 46]. DNSSEC [47, 48, 49] is a suite of Internet Engineering Task Force (IETF) specifications for securing information provided by DNS. More specifically, this suite specifies a set of extensions to DNS which are oriented to provide mechanisms that support authentication and integrity of DNS data, but not its availability or confidentiality. In particular, the security extensions were designed to protect resolvers from forged DNS data, such as that generated by DNS cache poisoning, by digitally signing DNS data using public-key cryptography. The keys used to sign the information are authenticated via a chain of trust, starting with a set of verified public keys that belong to the DNS root zone, which is the trusted third party.


4.1.1 A primer on the vulnerabilities of DNS

In this section we provide some background on DNS, its vulnerabilities and the security extensions specified in DNSSEC. The administration of DNS can be distributed among several (authoritative) name servers. DNS defines two types of server:

• Nameservers: authoritative servers that manage the data of a contiguous part of the domain name space and where the master files reside. For redundancy's sake, primary and secondary nameservers are provided for each zone.

• Resolvers: standard programs used to interact with the nameservers to extract information in response to client requests.

The process of obtaining information from DNS is called name resolution. Each server is initialized with the contact information of some authoritative servers of the root zone. Moreover, the root servers know how to contact the authoritative name servers of the second level (e.g. the domains com, net, edu). Second level servers know the information of third level servers, and so on. By these means, the requester can progressively refine the sought response by walking the tree structure, contacting different servers and getting "closer" to the answer after each referral. Servers store the results of previous queries in their caches in order to speed up the resolution process, but as the mapping of names evolves, cached data has a limited time of validity. Authoritative servers attach, for instance, a Time To Live attribute (TTL) to this stored data, indicating when it should be removed from the cache. For more details concerning the working of DNS we refer to [121, 35, 36].

In an early work, Cheung and Levitt [122] discuss security issues of DNS and provide a formal basis to model and prove security mechanisms that can be added to the system to prevent two specific problems:

• Failure to authenticate DNS responses: The message authentication mechanism used by DNS is weak. DNS checks whether a received response matches a previously issued query only by comparing the Id attached in the query's header with the Id of the response. Thus, if an attacker can predict the query Id used and his answer arrives before the real one does (when a name server receives multiple responses to its query, it uses the first one), he will be able to plant a forged response.

• Cache poisoning attacks: An attacker can take advantage of the lack of authentication mechanisms to intentionally formulate misleading information, injecting bogus data into the cache of some DNS server. Having cached this information, the cheated DNS server is likely to suffer a Denial of Service (DoS), if the attacker sends a negative response for a name that could actually be resolved; or, in the worst case, the attacker can masquerade as a trusted entity, and then be able to intercept, analyze and intentionally corrupt the communication.

Cache poisoning attacks exploit a flaw in DNS, namely, the weak mechanisms used to ensure the authentication of data origin. As one of the goals of the DNS security extensions was to solve these vulnerabilities, we have put special focus on developing a formal specification that allows us to reason about the effectiveness of those extensions regarding impersonation and cache poisoning attacks.

The extensions introduced by DNSSEC to improve the security of DNS require the combined application of mechanisms that make it possible to i) sign data, using public key cryptography, within zones, ii) generate a chain of trust along the DNSSEC tree, and iii) perform key exchange within parent-child zones, and regular key rollover routines. The security extensions provide origin authentication and integrity assurance services for DNS data, including mechanisms for public key distribution and authenticated denial of existence of DNS data. However, respecting the principle assumed in the design of DNS that all data in the system must be visible, DNSSEC is not designed to provide confidentiality [47]. The specification of the security extensions does not prevent the interaction of secure name servers and resolvers with non-secure ones. However, any communication that involves a non-secure server results in the loss of all DNSSEC-related security capabilities. For this reason, in our model it is assumed that every server is security-aware.

4.1.2 Contents of the chapter

We present here a minimalistic specification of a DNSSEC model which provides the grounds needed to formally state and verify security properties concerning the chain of trust of the DNSSEC tree. The specification puts forward an abstract formulation of the behavior of the protocol and the corresponding security-related events, where security goals, such as the prevention of cache poisoning attacks, can be given a formal treatment. This chapter builds upon the previously published paper [123]. The rest of the chapter is organized as follows. In Section 4.2 we develop the elements that are most relevant for our formal model. Section 4.3 presents some of the verified properties and outlines the corresponding proofs. Finally, Section 4.4 concludes with a summary of the chapter.

4.1.3 Formalization

The full development is available for download at http://www.fing.edu.uy/inco/grupos/gsi/sources/dnssec

4.2 Formalization of the DNSSEC model

In this section, we present an abstract model of DNSSEC. First, we introduce auxiliary definitions and then proceed to define the set of states and a notion of valid state. Finally, we define the semantics of security-related events as state transformers and provide a formal definition of their execution.

4.2.1 Formalizing states

The state of a DNSSEC system consists of a collection of components that we now proceed to describe. Servers distributed globally are represented as objects of an abstract type Process.




Secure resource records

A Resource Record (RR) is the basic unit of information used in DNS. A structure of this kind is defined by six fields: i) the field NAME defines the domain name that applies to the given RR, ii) the field TYPE indicates the type of resource record (Figure 4.1 depicts the most common ones), iii) the field CLASS is used to identify the protocol group to which the record belongs (in the sequel we shall only make use of the class IN, which is the one of interest to people using TCP/IP software, since it stands for Internet), iv) the field TTL stands for Time To Live; it is primarily used by resolvers, and specifies how long a resource record should be cached before discarding it, v) the field RDLENGTH is an unsigned integer that specifies the length of the RDATA field, vi) the field RDATA contains the data of the resource record; this field is defined differently for each type and class of data.

  Type     Description
  A        Internet Address
  CNAME    Canonical Name (nickname pointer)
  HINFO    Host Information
  MX       Mail Exchanger
  NS       Name Server
  PTR      Pointer
  SOA      Start Of Authority

  Figure 4.1: Types of Resource Records

Formally, a resource record is defined as an object of the following record type:

  RR ≝ [[ Rdname : DName, Rtype : RRType, RClass : RRClass, Rttl : TTL, RDL : RDLength, Rrdata : RData ]]    (4.1)

A set of resource records that share the same domain name, type and class is formalized as an object of the following type:

  RRset ≝ set RR    (4.2)

To implement the proposed security extensions, DNSSEC introduces additional security-related resource records: i) for origin authentication, DNSSEC provides a hierarchical public key infrastructure (PKI), which allows resolvers to obtain the DNSSEC key of a zone and use it for authenticating signed data belonging to that zone; to support this PKI, three resource records were introduced: RRSIG, DNSKEY, and DS, ii) for assuring integrity of data, each zone signs all the RRsets over which it is authoritative; in every transmission, RRSIGs are transmitted along with the replied RRsets, and by these means, when a transmission is received, data integrity can be verified, iii) for authenticated denial of existence, Next Secure (NSEC) resource records are provided; they list all of the existent RRs belonging to an owner name within an authoritative zone, making it possible to verify the non-existence of a RR by comparing against the RR list of its owner name. Figure 4.2 describes the new security-oriented resource records.

  RR       Description
  RRSIG    Signature over a RRset made using a private key
  DNSKEY   Public key, needed for verifying a RRSIG
  DS       Delegation Signer, pointer for building chains of trust. The parent DNSKEY signs the parent DS, the parent DS signs the child DNSKEY, and so forth, providing a mechanism to verify origin integrity from a domain name up to the root servers
  NSEC     Used to provide an authenticated non-existence of data. Indicates which name is the next one in the zone and which type codes are available for the current name

  Figure 4.2: Resource records introduced by DNSSEC

The type of a resource record, which includes the specific types defined by the DNSSEC specifications, is defined by the following enumerated type:

  RRType ≝ A | PTR | NS | CNAME | MX | SOA | HINFO | RRSIG | DNSKEY | DS | NSEC    (4.3)

We formalize the notion of secure (set of) RR by the following two types:

  SecRR ≝ [[ RR′ : RR, Rsign : RR ]]    (4.4)

  SecRRset ≝ [[ RRset′ : RRset, RRsign : RR ]]    (4.5)

where Rsign and RRsign contain, respectively, the signature corresponding to RR′ and to RRset′.

The distributed database

DNS manages a distributed database, which is indexed by a tuple (dname, type, class) of type Idx:

  Idx ≝ [[ Idname : DName, Itype : RRType, IClass : RRClass ]]    (4.6)



The range of this database is a set of secure RRs. A DNS database is thus formalized as an object of the following function type:

  DbMap ≝ Idx → SecRRset    (4.7)
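The datatypes (4.1)–(4.7) can be rendered in Coq roughly as in the following sketch. This is only an illustration: the base types DName, RRClass, TTL, RDLength and RData are left abstract, finite sets are approximated by lists, and the identifiers are illustrative rather than the exact names used in the accompanying development.

  (* Sketch of the datatypes (4.1)-(4.7); base types are kept abstract,  *)
  (* and finite sets are approximated by lists.                          *)
  Parameter DName RRClass TTL RDLength RData : Set.

  Inductive RRType : Set :=
    | A | PTR | NS | CNAME | MX | SOA | HINFO
    | RRSIG | DNSKEY | DS | NSEC.

  Record RR : Set := mkRR {
    Rdname : DName;
    Rtype  : RRType;
    RClass : RRClass;
    Rttl   : TTL;
    RDL    : RDLength;
    Rrdata : RData
  }.

  Definition RRset : Set := list RR.

  Record SecRR    : Set := mkSecRR    { RR'    : RR;    Rsign  : RR }.
  Record SecRRset : Set := mkSecRRset { RRset' : RRset; RRsign : RR }.

  Record Idx : Set := mkIdx {
    Idname : DName;
    Itype  : RRType;
    IClass : RRClass
  }.

  (* A secure DNS database maps indexes to signed RRsets, cf. (4.7). *)
  Definition DbMap : Set := Idx -> SecRRset.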

DNS message

A DNS(SEC) message consists of a header and four additional sections [43], as illustrated in Figure 4.3.

  Figure 4.3: DNS Message Format

The HEADER contains an identification (Id) field, which is used to match an answer to its corresponding query. The QUESTION section consists of a target domain name (QNAME), a type (QTYPE), and a class (QCLASS). A query to find out, for instance, the IP address of the host h1.fing.edu.uy will have QNAME=h1.fing.edu.uy, QTYPE=A and QCLASS=IN. The ANSWER section contains RRs which directly answer the query. The AUTHORITY section carries RRs which describe other authoritative servers (e.g. it may contain RRs to refer the querier to other name servers during the resolution process). The ADDITIONAL section carries RRs that may be helpful for using the RR information of other sections (e.g. it may contain A RRs providing the IP addresses for the RRs listed in the authority section). We define the DNSMessage type as follows:

  DNSMessage ≝ [[ Hdr : Header, Q : Idx, Ans : SecRRset, Auth : SecRRset, Additional : SecRRset ]]    (4.8)

Pending submissions

Pending submissions, that is to say, requests or answers (possibly triggered by a received answer) which are waiting to be delivered, are defined as objects of the following type:

  SendQR ≝ [[ SendQ : set InfoMsg, SendR : set InfoMsg ]]    (4.9)

An object of the type InfoMsg represents the information that needs to be sent:

  InfoMsg ≝ [[ MFrom : Process, MTo : Process, MMsg : DNSMessage ]]    (4.10)
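Similarly, the message-related types (4.8)–(4.10) admit a direct Coq rendering; again, this is only a sketch with abstract parameters and illustrative names.

  (* Sketch of the message-related types (4.8)-(4.10). Header, Idx,      *)
  (* SecRRset and Process are assumed to be the types introduced above.  *)
  Parameter Header Idx SecRRset Process : Set.

  Record DNSMessage : Set := mkDNSMessage {
    Hdr        : Header;
    Q          : Idx;
    Ans        : SecRRset;
    Auth       : SecRRset;
    Additional : SecRRset
  }.

  Record InfoMsg : Set := mkInfoMsg {
    MFrom : Process;
    MTo   : Process;
    MMsg  : DNSMessage
  }.

  (* Pending queries and responses waiting to be delivered, cf. (4.9). *)
  Record SendQR : Set := mkSendQR {
    SendQ : list InfoMsg;
    SendR : list InfoMsg
  }.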



System keys

DNSSEC security is based on the use of public key cryptography. Each DNS server owns two keys, namely, a Zone Signing Key (ZSK) and a Key Signing Key (KSK). A ZSK is used to sign every RRset within a zone, generating the RRSIG records. A KSK is used specifically to sign the DNSKEY RRset and generate its corresponding RRSIG. To access the system's DNSKEYs, the following record type has been defined:

  keySet ≝ [[ key : RR, ksign : RR ]]    (4.11)

where key is a DNSKEY and ksign its corresponding RRSIG signature. Each server has its corresponding pair of keys:

  keys ≝ [[ zsk : keySet, ksk : keySet ]]    (4.12)

Delegations

A delegation describes a parent-child relationship between DNSSEC servers. As shown in Figure 4.2, it helps building the DNSSEC chain of trust. To model delegation in our specification we have defined the following function type:

  DSpDB ≝ Process → DSp    (4.13)

where:

  DSp ≝ [[ Tsrv : Process, rrDS : SecRR ]]    (4.14)

In words, we have defined a function that maps parent servers to the corresponding children they trust, along with the digest RR that proves it. This is an essential part of the model, as it will allow us to reason about the effectiveness of the chain of trust.

System state

To reason about the DNSSEC security system, most details of the state may be abstracted. States are modeled by a record type with nine components:

  State ≝ [[ Servers : set Process, TrustedServers : set Process, ServerKeys : Process → keys, Delegations : Process → DSpDB, Parents : Process → DSp, viewAuth : Process → DbMap, viewCache : Process → DbMap, PendingQueries : Process → set Header, SendBuffer : Process → SendQR ]]    (4.15)
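The key, delegation and state records (4.11)–(4.15) can be sketched in Coq as follows; the component types are assumed to be those introduced above, finite sets are approximated by lists, and the names are illustrative.

  (* Sketch of the key, delegation and state records (4.11)-(4.15). *)
  Parameter Process Header RR SecRR DbMap SendQR : Set.

  Record keySet : Set := mkKeySet { key : RR; ksign : RR }.
  Record keys   : Set := mkKeys   { zsk : keySet; ksk : keySet }.

  (* A delegation records a trusted child server together with the DS   *)
  (* record that authenticates its keys, cf. (4.13)-(4.14).             *)
  Record DSp : Set := mkDSp { Tsrv : Process; rrDS : SecRR }.
  Definition DSpDB : Set := Process -> DSp.

  Record State : Set := mkState {
    Servers        : list Process;
    TrustedServers : list Process;
    ServerKeys     : Process -> keys;
    Delegations    : Process -> DSpDB;
    Parents        : Process -> DSp;
    viewAuth       : Process -> DbMap;
    viewCache      : Process -> DbMap;
    PendingQueries : Process -> list Header;
    SendBuffer     : Process -> SendQR
  }.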

The component Servers indicates the involved DNS servers, while the components TrustedServers and ServerKeys represent the set of publicly known trusted servers and the set of keys of every server, respectively. The delegations issued by each server and the parents of a server are indicated by the components Delegations and Parents. The components viewAuth and viewCache are used to represent the authoritative view and the cache view of every server, PendingQueries is a function that maps a server to the expected answers to the already performed queries, and SendBuffer is a buffer with the corresponding pending submissions for each server.

Valid state

We define a notion of valid state that captures essential security properties of the system, and more particularly of the DNSSEC specification provided by the "DNSSEC protocol document set", which refers to the three documents that form the core of the DNS security extensions: "DNS Security Introduction and Requirements" [47], "Resource Records for DNS Security Extensions" [48], and "Protocol Modifications for the DNS Security Extensions" [49]. We say that the predicate Valid holds on a state s if s satisfies the following properties:

1. "Every RRset within the view of a server must be signed by its corresponding RRSIG signature", which can be stated formally as follows:

  RR_integrity s ≝ ∀ srv : Process, srv ∈ s.Servers → signedView s srv (s.viewAuth srv) ∧ signedView s srv (s.viewCache srv)    (4.16)

where signedView verifies that every RRset of a server's zone is correctly signed by its corresponding RRSIG.

2. "Every server's zone keys must be signed by its corresponding Key Signing Key (KSK)":

  ZSK_integrity s ≝ ∀ srv : Process, srv ∈ s.Servers → verifySign (s.ServerKeys srv).ksk.key ({(s.ServerKeys srv).ksk.key} ∪ {(s.ServerKeys srv).zsk.key}) (s.ServerKeys srv).zsk.ksign    (4.17)

where verifySign checks that the RRSIG of a RRset has been effectively generated from a given key.

3. "Every server must be publicly known as trusted, or be verified by the corresponding digest in its parent's zone":

  KSK_integrity s ≝ ∀ srv : Process, srv ∈ s.TrustedServers ∨ checkDigest s srv (s.Parents srv).rrDS.RR′    (4.18)

where checkDigest validates whether a DS record in the parent zone really contains a digest of its child's keys.



We thus formally define the Valid predicate over State as the conjunction of the previous validity conditions:

  Valid s ≝ RR_integrity s ∧ ZSK_integrity s ∧ KSK_integrity s    (4.19)
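The validity predicate can be encoded directly as a conjunction, for instance as in the following sketch, where the three integrity conditions (4.16)–(4.18) are left abstract and the names are illustrative.

  (* Sketch of the validity predicate (4.19); the integrity conditions   *)
  (* (4.16)-(4.18) are axiomatized here.                                 *)
  Parameter State : Set.
  Parameter RR_integrity ZSK_integrity KSK_integrity : State -> Prop.

  Definition Valid (s : State) : Prop :=
    RR_integrity s /\ ZSK_integrity s /\ KSK_integrity s.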

4.2.2 Events

The working of the system is modelled in terms of the execution of events. An event is understood as an action that transforms the state of the system. Next we present the syntax, and specify the semantics and execution of events.

Syntax

The syntax and signature of every event relevant to our abstract model of DNSSEC is formalized by the non-recursive inductive set Event:

  Event ≝   Add_Server : Process → keys → DSpDB → DSp → DbMap → DbMap → Event
          | Delete_Server : Process → Event
          | Add_RRset : Process → Idx → RRset → Event
          | Delete_RRset : Process → Idx → Event
          | Server_ZSK_rollover : Process → RR → Event
          | Server_KSK_rollover : Process → RR → RR → Event
          | Add_TrustedServer : Process → Event
          | Del_TrustedServer : Process → Event
          | Send_Query : Process → Process → DNSMessage → Event
          | Receive_Query : Process → Process → DNSMessage → Event
          | Receive_Response : Process → Process → DNSMessage → Event
          | Send_PendingQ : Process → Event
          | Send_PendingR : Process → Event
          | RR_TimeOut : Process → Idx → RR → Event    (4.20)

A brief description of each event is shown in Table 4.1. In the next section we present the formal semantics for a small subset of these events.

  Table 4.1: DNSSEC model events

  Name                   Description
  Add_Server             Creates a new server in the domain name system
  Delete_Server          Deletes a server from the domain name system
  Add_RRset              Inserts a new RRset in the authoritative view of a given server
  Delete_RRset           Deletes a RRset from the authoritative view of a given server
  Server_ZSK_rollover    Performs a rollover of the zone key of a given server
  Server_KSK_rollover    Performs a rollover of the KSK of a given server
  Add_TrustedServer      Indicates the reliable publication of a server known as trusted
  Del_TrustedServer      Indicates that a server should no longer be considered as trusted
  Send_Query             Sends a specific query to a given server
  Receive_Response       Receives a response from a given server
  Send_PendingQ          Sends pending queries
  Send_PendingR          Sends pending responses
  RR_TimeOut             Indicates that the TTL of a given RRset has expired

Semantics

The behavior of the events is specified by their pre- and postconditions, which are given by the predicates Pre and Post respectively:

  Pre : State → Event → Prop
  Post : State → State → Event → Prop    (4.21)

Preconditions are defined in terms of the system state, while postconditions are defined in terms of the before and after states. Due to space constraints, we only present here the formal specification of three distinguished events, namely, Server_ZSK_rollover, Receive_Response and RR_TimeOut. In the specification of the postconditions we have omitted all fields of the state that remain invariant when executing the corresponding event. The names of the auxiliary functions and predicates used should be self-explanatory.

• Server_ZSK_rollover srv rrzsk

This event performs the rollover of the ZSK key of a specific server srv. For this event to take place, srv must exist in the system and rrzsk should be a ZSK key, that is to say, a key which is identified as a Zone Signing Key by the information in its RDATA.

  Pre s (Server_ZSK_rollover srv rrzsk) ≝ isServer s srv ∧ isZSK rrzsk    (4.22)

  Post s s′ (Server_ZSK_rollover srv rrzsk) ≝
    (s′.ServerKeys srv).zsk.key = rrzsk ∧
    (s′.ServerKeys srv).zsk.ksign = sign (s.ServerKeys srv).ksk.key ({(s.ServerKeys srv).ksk.key} ∪ {rrzsk}) ∧
    ∀ i : Idx, (s′.viewAuth srv i).RRsign = sign rrzsk (s.viewAuth srv i).RRset′    (4.23)

The postcondition states that when this event executes successfully the key rrzsk is stored as the new ZSK, its signature is calculated, using sign, and stored as its corresponding RRSIG. In addition, all the signatures for the resource records within the authoritative view of the server srv are re-calculated.

• Receive_Response srv_from srv_to m

This event models the processing of an answer message m. For this operation to succeed, the servers srv_from and srv_to must belong to the set of servers of the system and the received response must be an answer to a query previously submitted by the server srv_to.



  Pre s (Receive_Response srv_from srv_to m) ≝ isServer s srv_from ∧ isServer s srv_to ∧ m.Hdr ∈ (s.PendingQueries srv_to)    (4.24)

  Post s s′ (Receive_Response srv_from srv_to m) ≝
    if (nodata_newrefer m)
    then (s′.SendBuffer srv_to).SendQ = (s.SendBuffer srv_to).SendQ ∪ {(makeResp srv_to (next m.Additional.RRset′) m)}
    elseif (verifySign (s.ServerKeys srv_to).zsk.key m.Ans.RRset′ m.Ans.RRsign)
    then ((s′.viewCache srv_to) m.Q).RRset′ = m.Ans.RRset′ ∧
         ((s′.viewCache srv_to) m.Q).RRsign = m.Ans.RRsign ∧
         s′.PendingQueries srv_to = (s.PendingQueries srv_to) \ {m.Hdr}    (4.25)

The postcondition establishes the conditions required for a given resource record to be accepted and allocated in the cache view of a server. In the case that the received answer does not contain information in its auth section, but it does contain a closer server to refer to, the message will be re-sent to that server. On the contrary, if the received message contains data in its auth section, and the received RRsets as well as their corresponding received signatures are verified by the sender's zsk key, then the RRsets and their signatures are added to the server's cache view and, as the query has been successfully replied, the message is removed from the PendingQueries set. It should be noticed that a correct execution of this event would prevent a cache poisoning attack like the one discussed in Section 4.1.1. In Section 4.3 we will sketch the proof that the state remains valid after the execution of this event.

• RR_TimeOut srv i rr

This event runs on the expiration of the TTL of a resource record rr for a given server srv. The operation precondition is verified when srv belongs to the set of servers of the system and the TTL of rr has expired due to a timeout of either the resource record or its corresponding signature.

  Pre s (RR_TimeOut srv i rr) ≝ isServer s srv ∧ isTimeout s srv i rr ∧ (rr ∈ (s.viewCache srv i).RRset′ ∨ rr ∈ (s.viewAuth srv i).RRset′)    (4.26)

  Post s s′ (RR_TimeOut srv i rr) ≝
    if (rr ∈ (s.viewAuth srv i).RRset′)
    then ((s′.viewAuth srv) i).RRset′ = ((s.viewAuth srv) i).RRset′ \ {rr} ∧
         ((s′.viewAuth srv) i).RRsign = sign (s.ServerKeys srv).zsk.key ((s′.viewAuth srv) i).RRset′
    else ((s′.viewCache srv) i).RRset′ = ((s.viewCache srv) i).RRset′ \ {rr} ∧
         ((s′.viewCache srv) i).RRsign = sign (s.ServerKeys srv).zsk.key ((s′.viewCache srv) i).RRset′    (4.27)


The postcondition of RR_TimeOut states that the expired resource record and its signature are removed from the corresponding authoritative or cache view.
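The events and the axiomatic style of their semantics can be rendered in Coq along the following lines. This is only a sketch: State and the auxiliary predicates are axiomatized as parameters, only three constructors of Event are shown, and the identifiers are illustrative rather than the exact names used in the accompanying development.

  (* Sketch of the event syntax (4.20) and of the precondition of        *)
  (* RR_TimeOut (4.26).                                                  *)
  Parameter State Process Idx RR DNSMessage : Set.

  Inductive Event : Set :=
    | Server_ZSK_rollover : Process -> RR -> Event
    | Receive_Response    : Process -> Process -> DNSMessage -> Event
    | RR_TimeOut          : Process -> Idx -> RR -> Event
    (* ... the remaining constructors of (4.20) are elided ... *).

  Parameter isServer  : State -> Process -> Prop.
  Parameter isTimeout : State -> Process -> Idx -> RR -> Prop.
  (* inCacheView / inAuthView stand for membership of a RR in the RRset' *)
  (* of the cache view, resp. the authoritative view, at a given index.  *)
  Parameter inCacheView inAuthView : State -> Process -> Idx -> RR -> Prop.

  Definition Pre (s : State) (e : Event) : Prop :=
    match e with
    | RR_TimeOut srv i rr =>
        isServer s srv /\ isTimeout s srv i rr /\
        (inCacheView s srv i rr \/ inAuthView s srv i rr)
    | _ => True  (* the preconditions of the other events are elided *)
    end.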

Errors

When an event is executed and a precondition is not valid, the system answers with a corresponding error code. These error codes are defined by the following enumerated type:

  ErrorCode ≝ server_not_exists | invalid_zsk | query_not_asked | rrset_not_exists | rr_not_timeout | ...    (4.28)

Table 4.2 specifies the error code associated to unverified preconditions of the three events whose semantics was presented in Section 4.2.2. In our model this is formalized as a relation ErrorMsg between valid states of the system and error messages.

  Table 4.2: Error messages

  Server_ZSK_rollover srv rrzsk
    ¬ isServer s srv                                                    server_not_exists
    ¬ isZSK rrzsk                                                       invalid_zsk
  Receive_Response srv_from srv_to m
    ¬ isServer s srv_from ∨ ¬ isServer s srv_to                         server_not_exists
    m.Hdr ∉ s.PendingQueries srv_to                                     query_not_asked
  RR_TimeOut srv i rr
    ¬ isServer s srv                                                    server_not_exists
    rr ∉ (s.viewCache srv i).RRset′ ∨ rr ∉ (s.viewAuth srv i).RRset′    rrset_not_exists
    ¬ isTimeout s srv i rr                                              rr_not_timeout
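A possible Coq rendering of the error codes (4.28) is sketched below; the trailing "| ..." of the enumeration is reflected by eliding the remaining constructors, the ErrorMsg relation is left abstract, and the names are illustrative.

  (* Sketch of the error codes (4.28) and of the ErrorMsg relation. *)
  Inductive ErrorCode : Set :=
    | server_not_exists
    | invalid_zsk
    | query_not_asked
    | rrset_not_exists
    | rr_not_timeout
    (* ... further error codes elided ... *).

  Parameter State Event : Set.
  (* ErrorMsg relates a state and an event with the error code reported  *)
  (* when the event's precondition fails.                                *)
  Parameter ErrorMsg : State -> Event -> ErrorCode -> Prop.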

One-step execution

Executing an event e over a state s produces a new state s′ and a corresponding answer ans (denoted s ↪[e/ans] s′), where the relation between the former state and the new one is given by the postcondition relation Post:

  Pre s e        Post s s′ e
  --------------------------------------------------  (exec_pre)    (4.29)
                 s ↪[e/ok] s′

  ¬ Pre s e      ∃ ec : ErrorCode, ErrorMsg s e ec ∧ ans = error ec
  ------------------------------------------------------------------  (exec_npre)    (4.30)
                 s ↪[e/ans] s



If the precondition Pre s e is satisfied, then the resulting state s′ and the corresponding answer ans are the ones described by the relation exec. However, if Pre s e is not satisfied, then the state s remains unchanged and the system answer is the error message determined by the relation ErrorMsg. Formally, the possible answers of the system are defined by the following type:

  answer ≝ ok : answer | error : ErrorCode → answer    (4.31)

where ok is the answer resulting from a successful execution of an event.
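The answer type (4.31) and the one-step execution rules (4.29)–(4.30) can be sketched in Coq as an inductive relation, for instance as follows; Pre, Post and ErrorMsg are axiomatized, and the names are illustrative.

  (* Sketch of the answer type (4.31) and of the one-step execution      *)
  (* relation (4.29)-(4.30).                                             *)
  Parameter State Event ErrorCode : Set.
  Parameter Pre      : State -> Event -> Prop.
  Parameter Post     : State -> State -> Event -> Prop.
  Parameter ErrorMsg : State -> Event -> ErrorCode -> Prop.

  Inductive answer : Set :=
    | ok    : answer
    | error : ErrorCode -> answer.

  Inductive exec (s : State) (e : Event) : State -> answer -> Prop :=
    | exec_pre : forall s',
        Pre s e -> Post s s' e -> exec s e s' ok
    | exec_npre : forall ec,
        ~ Pre s e -> ErrorMsg s e ec -> exec s e s (error ec).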

4.3 Verification of security properties

In this section we discuss two relevant properties of the model that have been formally stated and verified. We first concentrate on the proof that one-step execution preserves the validity of states. We sketch the proof of this property and show that the notion of valid state embodies necessary conditions to prove the objective security properties. Then we show, formally, that having this invariance result is not enough to ensure the consistency of the chain of trust.

4.3.1 An invariant of one-step execution

A one-step execution invariant is a property that, if it holds for the state before the execution of any event, remains valid for the state resulting from that execution. We show that the validity of the model state, as defined in Section 4.2.1, is a one-step invariant of our specification.

Proposition 4.3.1. For any s, s′ : State, ans : answer and e : Event, if Valid s and s ↪[e/ans] s′ hold, then Valid s′ also holds.    (4.32)

Proof. The proof of this proposition proceeds by case analysis on the execution s ↪[e/ans] s′: i) if the precondition Pre s e is not satisfied, then the specification of executions establishes that the state s is not modified, i.e. s = s′, so s′ is trivially valid because s is valid; ii) otherwise, Post s s′ e must hold and we proceed by case analysis on the event e. We shall here discuss in detail the proof argument for the case in which the event e is of the form Receive_Response srv_from srv_to m. The complete formal proof of this and each of the remaining cases can be found in the accompanying formalization. The proof starts by analyzing the type of response obtained. In the case the answer has authoritative content, we proceed by checking whether the signature of the received RRset is verified. Then we have two possible cases:

1. If the received answer does not contain data then, as specified in the postcondition of Receive_Response, the component viewCache of the state s′ remains invariant; moreover, the only component modified in s′ is SendBuffer, which is not constrained by the validity property, so we can state that s′ remains valid.



2. On the contrary, if the answer contains data in its Auth section, we must analyze whether the received RRset is verified by its corresponding signature RRsig: i) if the signature is not verified, the received answer is discarded and no component of the state is modified, so the property remains satisfied for s′; ii) if the signature is correctly verified and the RRset is therefore added to the viewCache view of the server srv_to, the resulting state s′ stays valid, as viewCache is the only modified component and, by adding a verified RRset and RR signature, it remains correctly signed.

Observe that the instantiation of Proposition 4.3.1 to the case in which the executed event is Receive_Response reflects the (informal) security requirement that no man-in-the-middle can masquerade as a trusted entity in order to provide answers with fake resource records. By proving the correct execution of this event, we are providing (formally verified) evidence that if a classical DNS attacker sends a malicious resource record to a secure server, that record shall be discarded, as it will not be verified by its corresponding signature, which, in turn, has been created by its parent within the chain of trust. This is clearly a security improvement with respect to DNS, because the reception mechanism of that system is what makes it a potential victim of cache poisoning attacks.
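The statement of Proposition 4.3.1 can be rendered in Coq roughly as follows. This is only a sketch: Valid and the execution relation are left abstract here, the proof is omitted, and the concrete statement in the accompanying development may differ in its form.

  (* Sketch of the statement of Proposition 4.3.1. *)
  Parameter State Event answer : Set.
  Parameter Valid : State -> Prop.
  Parameter exec  : State -> Event -> State -> answer -> Prop.

  Lemma valid_is_invariant :
    forall (s s' : State) (e : Event) (ans : answer),
      Valid s -> exec s e s' ans -> Valid s'.
  Proof.
  Admitted.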

4.3.2 Compromising the chain of trust

It is important to notice that the conditions specified for a state of our model to be a valid one constitute an almost straightforward interpretation of the (security) recommendations laid down in relevant DNSSEC RFCs like, for instance, [47, 48, 49]. We can show, however, that although the validity of states is preserved by execution, this does not necessarily guarantee that the chain of trust remains valid. In particular, analyzing the conditions required for the rollover of a zone key, which in our model is specified as part of the semantics of the event Server_ZSK_rollover, we have detected a small inconsistency concerning the data of the system. Namely, for a resource record to be discarded from a view, either authoritative or cache, it is only required that its corresponding TTL has been reached. Now, a rollover of a zone key might need to be executed, for instance, in the case the server's zone is compromised, even if the TTL of the key has not expired. Consequently, every RRset within the zone must be re-signed to generate the new RRSIG records. Therefore, every DNS server that keeps these just re-signed records inside its cache view will become inconsistent. This issue could not be detected during the verification of the invariance of the event Server_ZSK_rollover because the specification of the DNSSEC protocol mandates that a resource record be discarded from the zone file only if its TTL has expired. This problem was independently discovered and pointed out by Bau and Mitchell in [124], where they study the security properties of a restricted model of DNSSEC using model checking techniques. We adhere to their recommendation that the resolver logic specified in the corresponding RFC [47] should be strengthened so as to prevent the identified problem.

We now proceed to provide a formal argument that the (incomplete) current specification of the zone key rollover procedure may facilitate the occurrence of inconsistent chains of trust. DNSSEC provides no mechanism to validate a cached resource record against another signature in its attestation chain. We shall show that if, for some reason, a rollover of the zone key of a given server (srv) is performed before the key's TTL has been reached, a resolver (srvOldCache) caching the corresponding resource record (identified by its index a) signed with the old zone key may keep that resource record cached for the entire signature validity period, turning the previously valid chain of trust into an inconsistent one. We assume that the server srvOldCache has information in its cache view corresponding to the authoritative zone of the server srv, i.e. for an index a:

  ((s.viewAuth srv) a).RRsign = ((s.viewCache srvOldCache) a).RRsign    (4.33)

Now, let us consider a correct execution of the event Server_ZSK_rollover. Thus, for arbitrary s′ : State and rrzsk : RR, the following predicates are verified:

  Valid s    (4.34)

  Pre s (Server_ZSK_rollover srv rrzsk)    (4.35)

  Post s s′ (Server_ZSK_rollover srv rrzsk)    (4.36)

Therefore, when performing a rollover of the zone key zsk of the server srv, the following inconsistency will take place:

  ((s′.viewAuth srv) a).RRsign ≠ ((s′.viewCache srvOldCache) a).RRsign    (4.37)

The proof proceeds by contradiction, assuming that the mentioned inconsistency does not in fact occur, i.e. considering as valid:

  ((s′.viewAuth srv) a).RRsign = ((s′.viewCache srvOldCache) a).RRsign    (4.38)

We know from the postcondition of the event Server_ZSK_rollover that:

  (s′.ServerKeys srv).zsk.key = rrzsk    (4.39)

  ∀ i : Idx, ((s′.viewAuth srv) i).RRsign = sign rrzsk (s.viewAuth srv i).RRset′    (4.40)

  s′.viewCache = s.viewCache    (4.41)

Rewriting (4.41) and then (4.33) in (4.38), we have that:

  ((s′.viewAuth srv) a).RRsign = ((s.viewAuth srv) a).RRsign    (4.42)

Now, considering that if a given record RR is signed with two different zsk keys different RRSIG records are obtained, we arrive at:

  ((s′.viewAuth srv) a).RRsign ≠ ((s.viewAuth srv) a).RRsign    (4.43)

Propositions (4.42) and (4.43) are contradictory. This allows us to conclude that (4.37) holds.
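The inconsistency argument can be summarized as a Coq lemma statement roughly along the following lines. This is only a sketch: the projections authSign and cacheSign stand for the RRsign components of the authoritative and cache views, all other ingredients are axiomatized, the proof is omitted, and the lemma established in the accompanying development may include additional hypotheses (e.g. that signing the same RRset with different keys yields different RRSIG records).

  (* Sketch of a lemma capturing the inconsistency (4.37).               *)
  (* authSign s srv a and cacheSign s srv a stand for                    *)
  (* ((s.viewAuth srv) a).RRsign and ((s.viewCache srv) a).RRsign.       *)
  Parameter State Process Idx RR Event : Set.
  Parameter Valid : State -> Prop.
  Parameter Pre   : State -> Event -> Prop.
  Parameter Post  : State -> State -> Event -> Prop.
  Parameter Server_ZSK_rollover : Process -> RR -> Event.
  Parameter authSign cacheSign : State -> Process -> Idx -> RR.

  Lemma zsk_rollover_compromises_chain :
    forall (s s' : State) (srv srvOldCache : Process) (a : Idx) (rrzsk : RR),
      Valid s ->
      authSign s srv a = cacheSign s srvOldCache a ->        (* (4.33) *)
      Pre s (Server_ZSK_rollover srv rrzsk) ->               (* (4.35) *)
      Post s s' (Server_ZSK_rollover srv rrzsk) ->           (* (4.36) *)
      authSign s' srv a <> cacheSign s' srvOldCache a.       (* (4.37) *)
  Proof.
  Admitted.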


4.4 Summary

The Domain Name System Security Extensions (DNSSEC) is a suite of specifications that provide origin authentication and integrity assurance services for DNS data. In particular, DNSSEC was designed to protect resolvers from forged DNS data, such as the one generated by DNS cache poisoning. In this chapter we have presented a minimalistic specification of a DNSSEC model which provides the grounds needed to formally state and verify security properties concerning the chain of trust of the DNSSEC tree. The model, which has been formalized and verified using the Coq proof assistant, specifies an abstract formulation of the behavior of the protocol and the corresponding security-related events, where security goals can be given a formal treatment.

Chapter 5

Conclusion and future work

Our work shows that it is feasible to formally analyze models of safety-critical applications, and is part of a trend to build and analyze realistic models of mobile devices, operating systems, hypervisors, and other kinds of complex critical systems. The Coq proof assistant is a useful tool when sophisticated algorithms and specifications are involved, and also as a general framework to design special platforms for the verification of critical systems. We now present the conclusions and describe future work for each of the areas of safety-critical applications addressed in this work.

5.1 Security models for mobile devices

We have provided the first verifiable formalization of the MIDP 2.0 security model, according to [65], and have also constructed the proofs of several important properties that should be satisfied by any implementation that fulfills its specification. Our formalization is detailed enough to study how other mechanisms interact with the security model, for instance, the interference between the security rules that control access to the device resources and mechanisms such as application installation. We have also proposed a refinement methodology that might be used to obtain a sound executable prototype of the security model.

Moreover, a high-level formalization of the JME-MIDP 2.0 Access Control module has also been developed. This specification assumes that the security policy of the device is static, that there exists at most one active suite in every state of the device, and that all the methods of a suite share the same protection domain. The obtained model, however, can be easily extended so as to consider multiple active suites, as well as to specify a finer relation allowing to express that a method is bound to a protection domain, and hence that two different methods of the same suite may be bound to different protection domains. With the objective of obtaining a certified executable algorithm of the access controller, the high-level specification has been refined into an equivalent executable one, and an algorithm has been constructed that is proved to satisfy that latter specification. The formal specification and the derived code of the algorithm contribute to the understanding of the working of such an important component of the security model of that platform.

Additionally, we have also presented an extension of the formalization of the MIDP 2.0 security model which considers the changes introduced in version 3.0 of MIDP. In particular, a new dimension of security is represented: security at the application level. This extended specification preserves the security properties verified for MIDP 2.0 and enables the research of new security properties for MIDP 3.0. In this way, the formalization is updated, keeping its validity as a useful tool to analyze the security model of MIDP at both the platform and the application level. Some weaknesses introduced by the informal specification of version 3.0 of MIDP are also discussed, in particular those regarding the interplay of the mechanisms for enforcing security at the application and at the platform level. They reflect potential weaknesses of implementations which satisfy the informal specification [24].

Finally, we have also built a framework that provides a uniform setting to define and formally analyze access control models which incorporate interactive permission requesting/granting mechanisms. In particular, the work presented here has focused on two distinguished permission models: the one defined by version 2.0 (and 3.0) of MIDP and the one defined by Besson et al. in [29]. A characterization of both models in terms of a formal definition of grant policy has also been provided. Other kinds of permission policies can also be expressed in the framework. In particular, it can be adapted to introduce a notion of permission revocation, a permission mode not considered in MIDP. A revoke can be modeled in the permission overwriting approach, for instance, by assigning a zero multiplicity to a resource type. In the accumulative approach, revocation might be modeled using negative multiplicities. Introducing revocations, in turn, enables modeling a notion of permission scope without further changes to the framework. One such scope would be grasped as the session interval delimited by an activation and a revocation of that permission. The formal development is about 20kLOC of Coq (see Figure 5.1).

  MIDP 2.0 formalization    3.2k
  AC for MIDP 2.0           8k
  MIDP 3.0 formalization    5.6k
  Framework                 3.2k
  Total                     20k

  Figure 5.1: LOC of Coq development – security for mobile devices

Further work is the study and specification, using the formal setting provided by the framework, of algorithms for enforcing the security policies derived from different sorts of permission models to control the access to sensitive resources of the devices. Moreover, one main objective is to extend the framework so as to be able to construct certified prototypes from the formal definitions of those algorithms. Finally, an exhaustive formal comparison between the JME-MIDP and Android security models is proposed as further work. We have begun developing a formal specification of the Android security model in Coq, considering [88, 125, 126, 127, 128, 129, 130, 131, 132, 133], which focuses on the analysis of the permission system in general, and on the scheme of permission re-delegation in particular [134].

5.2 A model of virtualization

We have presented a formalization of an idealized model of a hypervisor (initially introduced in [94]) and established within this model important security properties that are expected from virtualization platforms. In particular, our verification effort showed that the model is adequate to reason about safety properties (read and write isolation), 2-safety properties (OS isolation), and liveness properties (availability). The basic formal development is about 20kLOC of Coq (see Figure 5.2), including proofs, and forms a suitable basis for reasoning about hypervisors.

  Model and basic lemmas             4.8k
  Valid state invariance             8.0k
  Read and write isolation           0.6k
  OS Isolation and lemmas            6.0k
  Traces, safety and availability    1.0k
  Total                              20.4k

  Figure 5.2: LOC of Coq development – model of virtualization

Additionally, we have enhanced the idealized model of virtualization considered in [94] with an explicit treatment of errors, and showed that OS isolation is preserved in this setting. Moreover, we have implemented an executable specification that realizes the axiomatic semantics used in [94], for the model extended with cache and TLB. Finally, we have proved that in our idealized model the virtualization platform is transparent for guest operating systems, in the sense that they cannot tell whether they execute alone or together with other systems on the platform. The formal development for this extension is about 15kLOC of Coq, where 8k correspond to the verified executable specification and 7k to the proofs on the extended model with errors.

There are several directions for future work: one immediate direction is to complete our formalization with a proof of correctness of the hypervisor, as in the Hyper-V verification project. We also intend to enrich our model with devices and interrupts, and to give sufficient conditions for isolation properties to remain valid in this extended setting. Another immediate direction is to prove availability properties on an implementation of the hypervisor. Additionally, we intend to implement and analyze alternative executable semantics for different cache models and policies. Finally, it is of interest to understand how to adapt our models to other virtualization paradigms such as full virtualization and microvisors.

5.3 The DNSSEC model

We have developed an abstract model of DNS that incorporates the security extensions defined by the DNSSEC specification suite. We have established and proved the security properties required to be satisfied by the operational behaviour of an implementation of this version of DNS, which is specified in our model by an abstract state and the events that represent the working of the system. In particular, this result provides a formal means to assess the effectiveness of a (correct) deployment of the security requirements specified by DNSSEC to prevent cache poisoning attacks.

In addition to that, we have identified an exploitable vulnerability that, according to our understanding, emerges as a flaw of the specification in question. As shown in Section 4.3, the conditions that must be verified in the case a rollover of a zone key must be performed, as specified in RFC 4033 [47], do not suffice to ensure the validity of the chain of trust. We have sketched in this chapter a formal proof that shows how the chain of trust can be compromised if only the expiration time of a key is considered as the cause to perform the rollover procedure. This property is established as a lemma in the formalization available at http://www.fing.edu.uy/inco/grupos/gsi/sources/dnssec. The formal development is about 5kLOC of Coq (see Figure 5.3), including proofs, and forms a suitable basis for reasoning about DNSSEC.

  Model and basic lemmas    2k
  Valid state invariance    3k
  Proof of inconsistency    0.2k
  Total                     5.2k

  Figure 5.3: LOC of Coq development – DNSSEC formalization

There are several directions for future work. One immediate direction is to extend our specification and to establish results concerning the impact of introducing resource records of type NSEC. An NSEC record points to the next valid name in the zone file and is used to provide proof of non-existence of names within a zone. Through repeated queries that return NSEC records it is possible to retrieve all of the names in the zone, a process commonly called walking the zone. This side effect of the NSEC architecture subverts policies frequently implemented by zone owners which forbid zone transfers by arbitrary clients. We see as an interesting and challenging task to specify ways of preventing zone walking by constructing NSEC records that cover fewer names [135].

Bibliography

[1] Housley, R., Ford, W., Polk, W., Solo, D.: Internet X.509 Public Key Infrastructure Certificate and CRL Profile (1999)
[2] Gollmann, D.: Computer Security (3. ed.). Wiley (2011)
[3] Mclean, J.: Security models. In: Encyclopedia of Software Engineering, Wiley and Sons (1994)
[4] Schneider, F.B.: Enforceable security policies. ACM Trans. Inf. Syst. Secur. 3(1) (2000) 30–50
[5] Anderson, J.P.: Computer Security technology planning study. Technical report, Deputy for Command and Management System, USA (1972)
[6] Bell, D.E., LaPadula, L.J.: Secure computer systems: Mathematical foundations. Technical Report MTR-2547, Vol. 1, MITRE Corp., Bedford, MA (1973)
[7] Department of Defense: Department of Defense Trusted Computer System Evaluation Criteria. (1985) DOD 5200.28-STD (supersedes CSC-STD-001-83).
[8] CCMB-2012: Common Criteria for Information Technology Security Evaluation, Part 1: Introduction and General Model, Version 3.1. (2012)
[9] Chetali, B., Nguyen, Q.H.: About the world-first smart card certificate with eal7 formal assurances. Slides 9th ICCC, Jeju, Korea (2008) Available at: www.commoncriteriaportal.org/iccc/9iccc/pdf/B2404.pdf.
[10] Betarte, G., Giménez, E., Loiseaux, C., Chetali, B.: FORMAVIE: Formal Modeling and Verification of the Java Card 2.1.1 Security Architecture. In: Proceedings of eSmart'02. (2002)
[11] Andronick, J.: Modélisation et Vérification Formelles de Systèmes Embarqués dans les Cartes à Microprocesseur – Plate-Forme Java Card et Système d'Exploitation. PhD thesis, Université Paris-Sud (2006)
[12] The Coq Development Team: The Coq Proof Assistant Reference Manual – Version V8.4. (2012)
[13] Bertot, Y., Castéran, P.: Interactive Theorem Proving and Program Development. Coq'Art: The Calculus of Inductive Constructions. Texts in Theoretical Computer Science. Springer-Verlag (2004)



[14] Cohen, E.: Validating the microsoft hypervisor. In Misra, J., Nipkow, T., Sekerinski, E., eds.: FM ’06. Volume 4085 of LNCS. Springer (2006) 81–81 [15] Leinenbach, D., Santen, T.: Verifying the microsoft hyper-v hypervisor with vcc. In Cavalcanti, A., Dams, D., eds.: FM 2009. Volume 5850 of LNCS., Springer (2009) 806–809 [16] Klein, G., Andronick, J., Elphinstone, K., Heiser, G., Cock, D., Derrin, P., Elkaduwe, D., Engelhardt, K., Kolanski, R., Norrish, M., Sewell, T., Tuch, H., Winwood, S.: seL4: Formal verification of an OS kernel. Communications of the ACM (CACM) 53(6) (2010) 107–115 [17] Terauchi, T., Aiken, A.: Secure information flow as a safety problem. In Hankin, C., Siveroni, I., eds.: Proceedings of SAS’05. Volume 3672 of Lecture Notes in Computer Science., Springer-Verlag (2005) 352–367 [18] Clarkson, M.R., Schneider, F.B.: Hyperproperties. Journal of Computer Security 18(6) (2010) 1157–1210 [19] Grupo de Seguridad Inform´atica: Projects: STEVE, ReSeCo, VirtualCert, and VirtualCert - phase II (2013) Available at: www.fing.edu.uy/inco/grupos/gsi. [20] Sun Microsystems, Inc.: Java Platform Micro Edition. (Last accessed: March 2013) Available at: //java.sun.com/javame/index.jsp. [21] JSR 139 Expert Group: Connected Limited Device Configuration. Version 1.1. Sun Microsystems, Inc. and Motorola, Inc. (2003) [22] JSR 37 Expert Group: Mobile information device profile for java 2 micro edition. version 1.0. Technical report, Sun Microsystems, Inc. (2000) [23] JSR 118 Expert Group: Mobile information device profile for java 2 micro edition. version 2.0. Technical report, Sun Microsystems, Inc. and Motorola, Inc. (2002) [24] JSR 271 Expert Group: Mobile information device profile for java micro edition. version 3.0. Technical report, Motorola, Inc. (2009) [25] Open Handset Alliance: Android project. (Last accessed: March 2013) Available at: //source.android.com/. [26] Enck, W., Ongtang, M., McDaniel, P.: Understanding android security. IEEE Security and Privacy 7 (2009) 50–57 [27] Shabtai, A., Fledel, Y., Kanonov, U., Elovici, Y., Dolev, S., Glezer, C.: Google android: A comprehensive security assessment. IEEE Security and Privacy 8(2) (2010) 35–44 [28] Wallach, D.S., Felten, E.W.: Understanding java stack inspection. In: Proceedings of the 1998 IEEE Symposium on Security and Privacy. (1998) 52–63 [29] Besson, F., Dufay, G., Jensen, T.: A formal model of access control for mobile interactive devices. In: 11th European Symposium on Research in Computer Security (ESORICS’06), LNCS 4189. (2006) 110–126

BIBLIOGRAPHY [30] Garfinkel, T., Warfield, A.: What virtualization can do for security. ;login: The USENIX Magazine 32 (2007) [31] Cohen, E., Paul, W.J., Schmaltz, S.: Theory of multi core hypervisor verification. In van Emde Boas, P., Groen, F.C.A., Italiano, G.F., Nawrocki, J.R., Sack, H., eds.: SOFSEM. Volume 7741 of Lecture Notes in Computer Science., Springer (2013) 1–27 [32] Sewell, T., Winwood, S., Gammie, P., Murray, T., Andronick, J., Klein, G.: seL4 enforces integrity. In: 2nd Conference on Interactive Theorem Proving, Nijmegen, The Netherlands (2011) [33] Murray, T., Matichuk, D., Brassil, M., Gammie, P., Bourke, T., Seefried, S., Lewis, C., G., X., Klein, G.: sel4: from general purpose to a proof of information flow enforcement. In: IEEE Symposium on Security and Privacy. (2013) 415–429 [34] Oheimb, D.v.: Information flow control revisited: Noninfluence = Noninterference + Nonleakage. In Samarati, P., Ryan, P., Gollmann, D., Molva, R., eds.: Computer Security – ESORICS 2004. Volume 3193 of LNCS., Springer (2004) 225–243 [35] Mockapetris, P.: Domain names - concepts and facilities. RFC 1034 (Standard) (1987) Updated by RFCs 1101, 1183, 1348, 1876, 1982, 2065, 2181, 2308, 2535, 4033, 4034, 4035, 4343, 4035, 4592, 5936. [36] Mockapetris, P.: Domain names - implementation and specification. RFC 1035 (Standard) (1987) Updated by RFCs 1101, 1183, 1348, 1876, 1982, 1995, 1996, 2065, 2136, 2181, 2137, 2308, 2535, 2845, 3425, 3658, 4033, 4034, 4035, 4343, 5936, 5966. [37] Bellovin, S.M.: Using the Domain Name System for System Break-ins. In: Proceedings of the 5th conference on USENIX UNIX Security Symposium - Volume 5, Berkeley, CA, USA, USENIX Association (1995) 18–18 [38] Bellovin, S.M.: Security problems in the tcp/ip protocol suite. SIGCOMM Comput. Commun. Rev. 19 (1989) 32–48 [39] Cert/c, C.: Cert advisory ca-2001-02 multiple vulnerabilities in bind (2001) Available at: http://www.cert.org/advisories/CA-2001-02.html. [40] Gavron, E.: RFC 1535: A security problem and proposed correction with widely deployed DNS software (1993) Status: INFORMATIONAL. [41] Purdue University. Dept. of Computer Sciences and Schuba, C.L.: Addressing weaknesses in the domain name system protocol. Number n.o 28 in CSD-TR / Computer Sciences Department, Purdue University. Purdue University, Dept. of Computer Sciences (1994) [42] Vixie, P.: Dns and bind security issues. In: Proceedings of the 5th conference on USENIX UNIX Security Symposium - Volume 5, Berkeley, CA, USA, USENIX Association (1995) 19–19




[43] Davidowicz, D.: Domain Name System (DNS) Security (1999) Available at: //compsec101.antibozo.net/papers/dnssec/dnssec.html.
[44] Atkins, D., Austein, R.: Threat analysis of the domain name system (DNS). RFC 3833, Internet Engineering Task Force (2004)
[45] Herzberg, A., Shulman, H.: Security of patched dns. CoRR abs/1205.5190 (2012)

[46] Son, S., Shmatikov, V.: The hitchhiker’s guide to dns cache poisoning. In Jajodia, S., Zhou, J., eds.: SecureComm. Volume 50 of Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering., Springer (2010) 466–483 [47] Arends, R., Austein, R., Larson, M., Massey, D., Rose, S.: DNS Security Introduction and Requirements. RFC 4033 (Proposed Standard) (2005) Updated by RFC 6014. [48] Arends, R., Austein, R., Larson, M., Massey, D., Rose, S.: Resource Records for the DNS Security Extensions. RFC 4034 (Proposed Standard) (2005) Updated by RFCs 4470, 6014. [49] Arends, R., Austein, R., Larson, M., Massey, D., Rose, S.: Protocol Modifications for the DNS Security Extensions. RFC 4035 (Proposed Standard) (2005) Updated by RFCs 4470, 6014. [50] T. I. I. Foundation: .se top level domain (2011) Available at: www.iis.se/. [51] IANA: Interim trust-anchor repository (2011) Available at: //itar.iana.org/. [52] Internet Research Lab, UCLA CS Department: The secspider dnssec monitoring project (2011) Available at: //secspider.cs.ucla.edu/. [53] Nominet: Nominet dnssec testbed (2011) Available at: www.nominet.org.uk/ registrars/DNSSEC/. [54] P.I.R. PIR: Org Top Level Domain (2011) Available at: www.iana.org/domains. [55] Arends, R., Bellgrim, R., Dalitz, A., Dickinson, J.A., Jansen, J., Lloyd, S., Mekking, M., Morris, S., Post, R., Schaeffer, Y., Schlyter, J., Wallstrm, P.: The opendnssec project (Last accessed: April 2014) Available at: www.test.org/doe/. [56] Yang, H., Osterweil, E., Massey, D., Lu, S., Zhang, L.: Deploying cryptography in internet-scale systems: A case study on dnssec. IEEE Trans. Dependable Sec. Comput. 8(5) (2011) 656–669 [57] Letouzey, P.: A New Extraction for Coq. In Geuvers, H., Wiedijk, F., eds.: Types for Proofs and Programs, Second International Workshop, TYPES 2002, Berg en Dal, The Netherlands, April 24-28, 2002. Volume 2646 of Lecture Notes in Computer Science., Springer-Verlag (2003) [58] Letouzey, P.: Programmation fonctionnelle certifi´ee – L’extraction de programmes dans l’assistant Coq. PhD thesis, Universit´e Paris-Sud (2004)



[59] Leroy, X.: Formal verification of a realistic compiler. Commun. ACM 52 (2009) 107–115 [60] Barthe, G., Gr´egoire, B., Zanella B´eguelin, S.: Formal certification of code-based cryptographic proofs. SIGPLAN Not. 44(1) (2009) 90–101 [61] Coquand, T., Huet, G.: (1988) 95–120

The calculus of constructions. Inf. Comput. 76(2-3)

[62] Coquand, T., Paulin-Mohring, C.: Inductively defined types. In Martin-L¨ of, P., Mints, G., eds.: COLOG-88, International Conference on Computer Logic, Tallinn, USSR, December 1988. Volume 417 of Lecture Notes in Computer Science., Springer-Verlag (1990) 50–66 [63] Paulin-Mohring, C.: Inductive definitions in the system coq - rules and properties. In Bezem, M., Groote, J.F., eds.: 1st Int. Conf. on Typed Lambda Calculi and Applications. Volume 664 of LNCS., Springer-Verlag (1993) 328–345 [64] Housley, R., Ford, W., Polk, W., Solo, D.: Internet X.509 Public Key Infrastructure Certificate and CRL Profile (1999) [65] Zanella B´eguelin, S., Betarte, G., Luna, C.: A formal specification of the MIDP 2.0 security model. In: 4th International workshop on Formal Aspects in Security and Trust, FAST 2006. Volume 4691 of Lecture Notes in Computer Science., Springer (2006) 220–234 [66] Roushani Oskui, R., Betarte, G., Luna, C.: A certified access controller for jmemidp 2.0 enabled mobile devices. In: 2009 International Conference of the Chilean Computer Science Society, SCCC 2009, Los Alamitos, CA, USA, IEEE Computer Society (2009) 51–58 [67] Mazeikis, G., Betarte, G., Luna, C.: Formal specification and analysis of the MIDP 3.0 security model. In: 2009 International Conference of the Chilean Computer Science Society, SCCC 2009, Los Alamitos, CA, USA, IEEE Computer Society (2009) 59–66 [68] Crespo, J., Betarte, G., Luna, C.: A framework for the analysis of access control models for interactive mobile devices. In: Types for Proofs and Programs, International Conference, TYPES 2008. Volume 5497 of Lecture Notes in Computer Science., Springer (2009) 49–63 [69] Spivey, J.M.: The Z notation: a reference manual. Prentice-Hall, Inc., Upper Saddle River, NJ, USA (1989) [70] Back, R.J., Wright, J.: Refinement Calculus: A Systematic Introduction (Texts in Computer Science). Springer (1998) [71] Morgan, C.: Programming from specifications. Prentice-Hall, Inc., Upper Saddle River, NJ, USA (1990) [72] Roushani Oskui, R.: Especificaci´on formal en coq del m´odulo de control de acceso de MIDP 2.0 para dispositivos m´oviles interactivos. Technical report, Master’s thesis, Universidad Nacional de Rosario (2008)


BIBLIOGRAPHY

[73] Mazeikis, G., Luna, C.: Autorizaci´on de acceso en midp 3.0. In: V Congreso Iberoamericano de Seguridad Inform´atica, CIBSI 2009. (2009) 283–297 [74] Forte, J.: Formalizaci´on del protocolo de comunicaci´on entre aplicaciones para dispositivos m´oviles java. Master’s thesis, FCEIA, Universidad Nacional de Rosario, Argentina (2013) [75] Prince, C.: An´alisis formal de la instalaci´on de aplicaciones en midp 3.0. Master’s thesis, FCEIA, Universidad Nacional de Rosario, Argentina (2014) [76] Jensen, T., M´etayer, D.L., Thorn, T.: Verification of control flow based security properties. In: Proc. of the 20th IEEE Symp. on Security and Privacy, New York: IEEE Computer Society (1999) 89–103 [77] Besson, F., Jensen, T., Le M´etayer, D., Thorn, T.: Model checking security properties of control flow graphs. J. Comput. Secur. 9 (2001) 217–250 [78] Bartoletti, M., Degano, P., Ferrari, G.: Static analysis for stack inspection. In: Proceedings of International Workshop on Concurrency and Coordination, Electronic Notes in Theoretical Computer Science. Volume 54., Elsevier (2001) 2001 [79] Kolsi, O.: Midp 2.0 security enhancements. In: In Proceedings of the 37th Annual Hawaii International Conference on System Sciences (HICSS04) - Track 9 - Volume 9. (2004) 287–294 [80] Mohamed, M.D., Saleh, M., Talhi, C., Zhioua, S.: Security analysis of wireless java. In: In Proceedings of the 3rd Annual Conference on Privacy, Security and Trust, PST05. (2005) 1–11 [81] Fournet, C., Gordon, A.D.: Stack inspection: Theory and variants. ACM Trans. Program. Lang. Syst. 25 (2003) 360–399 [82] mo Chang, B.: Static check analysis for java stack inspection. ACM SIGPLAN Notices 41 (2006) [83] Besson, F., Dufay, G., Jensen, T., Pichardie, D.: Verifying resource access control on mobile interactive devices. J. Comput. Secur. 18 (2010) 971–998 [84] Shin, W., Kiyomoto, S., Fukushima, K., Tanaka, T.: Towards formal analysis of the permission-based security model for android. In: Proceedings of the 2009 Fifth International Conference on Wireless and Mobile Communications. ICWMC ’09, Washington, DC, USA, IEEE Computer Society (2009) 87–92 [85] Shin, W., Kiyomoto, S., Fukushima, K., Tanaka, T.: A formal model to analyze the permission authorization and enforcement in the Android framework. In: 2010 IEEE 2nd International Conference on Social Computing, Los Alamitos, CA, USA, IEEE Computer Society (2010) 944–951 [86] Open Handset Alliance: Android developers. (Last accessed: April 2014) Available at: developer.android.com/index.html.

[87] Chaudhuri, A.: Language-based security on Android. In: Proceedings of the ACM SIGPLAN Fourth Workshop on Programming Languages and Analysis for Security. PLAS ’09, New York, NY, USA, ACM (2009) 1–7
[88] Romano, A., Luna, C.: Descripción y análisis del modelo de seguridad de Android. Technical Report RT 13-08, PEDECIBA Informática (2013)
[89] Igarashi, A., Kobayashi, N.: Resource usage analysis. ACM Trans. Program. Lang. Syst. 27(2) (2005) 264–313
[90] Bartoletti, M., Degano, P., Ferrari, G.L.: History-based access control with local policies. In: Proceedings of FOSSACS 2005, Springer (2005) 316–332
[91] Chander, A., Espinosa, D., Islam, N., Lee, P., Necula, G.C.: Enforcing resource bounds via static verification of dynamic checks. ACM Trans. Program. Lang. Syst. 29 (2007)
[92] Goldberg, R.P.: Survey of virtual machine research. IEEE Computer Magazine 7 (1974) 34–45
[93] Barham, P., Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., Neugebauer, R., Pratt, I., Warfield, A.: Xen and the art of virtualization. In: SOSP ’03: Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles, New York, NY, USA, ACM Press (2003) 164–177
[94] Barthe, G., Betarte, G., Campo, J., Luna, C.: Formally verifying isolation and availability in an idealized model of virtualization. In Butler, M., Schulte, W., eds.: Formal Methods. Volume 6664 of LNCS., Springer-Verlag (2011)
[95] Barthe, G., Betarte, G., Campo, J.D., Chimento, J.M., Luna, C.: Formally Verified Implementation of an Idealized Model of Virtualization. In Matthes, R., Schubert, A., eds.: 19th International Conference on Types for Proofs and Programs (TYPES 2013). Volume 26 of Leibniz International Proceedings in Informatics (LIPIcs)., Dagstuhl, Germany, Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik (2014) 45–63
[96] Clarke, E., Emerson, E.: Design and synthesis of synchronization skeletons using branching-time temporal logic. In: Logic of Programs, Workshop, London, UK, Springer-Verlag (1982) 52–71
[97] van Leeuwen, J., ed.: Handbook of Theoretical Computer Science, Volume B: Formal Models and Semantics. Elsevier and MIT Press (1990)
[98] Luna, C.: Computation tree logic for reactive systems and timed computation tree logic for real time systems. Contributions to the Coq system, Universidad de la República, Uruguay (2000)
[99] Oheimb, D.v., Lotz, V., Walter, G.: Analyzing SLE 88 memory management security using Interacting State Machines. International Journal of Information Security 4(3) (2005) 155–171


[100] Banerjee, A., Naumann, D.: Stack-based access control for secure information flow. Journal of Functional Programming 15 (2005) 131–177. Special Issue on Language-Based Security.
[101] Rushby, J.M.: Noninterference, Transitivity, and Channel-Control Security Policies. Technical Report CSL-92-02, SRI International (1992)
[102] Barthe, G., Betarte, G., Campo, J., Luna, C.: Cache-Leakage Resilient OS Isolation in an Idealized Model of Virtualization. In: IEEE 25th Computer Security Foundations Symposium (CSF). (2012) 186–197
[103] Hwang, J.Y., Suh, S.B., Heo, S.K., Park, C.J., Ryu, J.M., Park, S.Y., Kim, C.R.: Xen on ARM: System virtualization using Xen hypervisor for ARM-based secure mobile phones. In: 5th IEEE Consumer and Communications Networking Conference. (2008)
[104] The VirtualCert project: Supporting Coq formalization (2013) Available at: www.fing.edu.uy/inco/grupos/gsi/proyectos/virtualcert.php.
[105] Klein, G.: Operating system verification – an overview. Sādhanā 34(1) (2009) 27–69
[106] Shao, Z.: Certified software. Commun. ACM 53(12) (2010) 56–66
[107] Alkassar, E., Hillebrand, M., Paul, W., Petrova, E.: Automated verification of a small hypervisor. In Leavens, G., O’Hearn, P., Rajamani, S., eds.: Verified Software: Theories, Tools, Experiments. Volume 6217 of LNCS., Springer (2010) 40–54
[108] Alkassar, E., Cohen, E., Hillebrand, M., Kovalev, M., Paul, W.: Verifying shadow page table algorithms. In Bloem, R., Sharygina, N., eds.: Formal Methods in Computer-Aided Design, 10th International Conference (FMCAD’10), Switzerland, IEEE CS (2010)
[109] Alkassar, E., Paul, W., Starostin, A., Tsyban, A.: Pervasive verification of an OS microkernel: Inline assembly, memory consumption, concurrent devices. In: Third International Conference on Verified Software: Theories, Tools, and Experiments (VSTTE’10). Volume 6217 of LNCS., Edinburgh, Springer (2010) 71–85
[110] Elkaduwe, D., Klein, G., Elphinstone, K.: Verified protection model of the seL4 microkernel. In: Proceedings of the 2nd International Conference on Verified Software: Theories, Tools, Experiments. VSTTE ’08, Springer (2008) 99–114
[111] Yang, J., Hawblitzel, C.: Safe to the last instruction: automated verification of a type-safe operating system. Commun. ACM 54(12) (2011) 123–131
[112] Tews, H., Weber, T., Poll, E., Eekelen, M.v.: Formal Nova interface specification. Technical Report ICIS-R08011, Radboud University Nijmegen (2008) Robin deliverable D12.
[113] Heitmeyer, C.L., Archer, M., Leonard, E.I., McLean, J.: Formal specification and verification of data separation in a separation kernel for an embedded system. In: Proceedings of the 13th ACM Conference on Computer and Communications Security. CCS ’06, NY, USA, ACM (2006) 346–355
[114] Greve, D., Wilding, M., Vanfleet, W.M.: A separation kernel formal security policy. In: Proc. Fourth International Workshop on the ACL2 Theorem Prover and Its Applications. (2003)
[115] Martin, W., White, P., Taylor, F., Goldberg, A.: Formal construction of the mathematically analyzed separation kernel. In: The Fifteenth IEEE International Conference on Automated Software Engineering. (2000)
[116] Andronick, J., Chetali, B., Ly, O.: Using Coq to Verify Java Card Applet Isolation Properties. In Basin, D.A., Wolff, B., eds.: International Conference on Theorem Proving in Higher Order Logics (TPHOLs’03). Volume 2758 of LNCS., Springer-Verlag (2003) 335–351
[117] Tollitte, P.N., Delahaye, D., Dubois, C.: Producing certified functional code from inductive specifications. In Hawblitzel, C., Miller, D., eds.: CPP. Volume 7679 of LNCS., Springer (2012) 76–91
[118] Berghofer, S., Bulwahn, L., Haftmann, F.: Turning inductive into equational specifications. In Berghofer, S., Nipkow, T., Urban, C., Wenzel, M., eds.: TPHOLs. Volume 5674 of LNCS., Springer (2009) 131–146
[119] Balaa, A., Bertot, Y.: Fix-point equations for well-founded recursion in type theory. In Aagaard, M., Harrison, J., eds.: TPHOLs. Volume 1869 of Lecture Notes in Computer Science., Springer (2000) 1–16
[120] Barthe, G., Forest, J., Pichardie, D., Rusu, V.: Defining and reasoning about recursive functions: A practical tool for the Coq proof assistant. In Hagiya, M., Wadler, P., eds.: FLOPS. Volume 3945 of LNCS., Springer (2006) 114–129
[121] Liu, C., Albitz, P.: DNS and BIND - help for system administrators: covers BIND 9.3 (5. ed.). O’Reilly (2006)
[122] Cheung, S., Levitt, K.N.: A formal-specification based approach for protecting the domain name system. In: Proceedings of the IEEE International Conference on Dependable Systems and Networks. (2000) 25–28
[123] Eixarch, E.B., Betarte, G., Luna, C.D.: A formal specification of the DNSSEC model. ECEASST 48 (2011)
[124] Bau, J., Mitchell, J.C.: A security evaluation of DNSSEC with NSEC3. IACR Cryptology ePrint Archive 2010 (2010) 115
[125] Shin, W., Kiyomoto, S., Fukushima, K., Tanaka, T.: A formal model to analyze the permission authorization and enforcement in the Android framework. In: Proceedings of the 2010 IEEE Second International Conference on Social Computing. SOCIALCOM ’10, Washington, DC, USA, IEEE Computer Society (2010) 944–951


[126] May, M.J., Bhargavan, K.: Towards unified authorization for Android. In: Proceedings of the 5th International Conference on Engineering Secure Software and Systems. ESSoS’13, Berlin, Heidelberg, Springer-Verlag (2013) 42–57
[127] Fragkaki, E., Bauer, L., Jia, L., Swasey, D.: Modeling and enhancing Android's permission system. In Foresti, S., Yung, M., Martinelli, F., eds.: ESORICS. Volume 7459 of Lecture Notes in Computer Science., Springer (2012) 1–18
[128] Felt, A.P., Wang, H.J., Moshchuk, A., Hanna, S., Chin, E.: Permission re-delegation: Attacks and defenses. In: Proceedings of the 20th USENIX Conference on Security. SEC’11, Berkeley, CA, USA, USENIX Association (2011) 22–22
[129] Felt, A.P., Chin, E., Hanna, S., Song, D., Wagner, D.: Android permissions demystified. In: Proceedings of the 18th ACM Conference on Computer and Communications Security. CCS ’11, New York, NY, USA, ACM (2011) 627–638
[130] Davi, L., Dmitrienko, A., Sadeghi, A.R., Winandy, M.: Privilege escalation attacks on Android. In: Proceedings of the 13th International Conference on Information Security. ISC’10, Berlin, Heidelberg, Springer-Verlag (2011) 346–360
[131] Chin, E., Felt, A.P., Greenwood, K., Wagner, D.: Analyzing inter-application communication in Android. In: Proceedings of the 9th International Conference on Mobile Systems, Applications, and Services. MobiSys ’11, New York, NY, USA, ACM (2011) 239–252
[132] Bugliesi, M., Calzavara, S., Spanò, A.: Lintent: Towards security type-checking of Android applications. In Beyer, D., Boreale, M., eds.: FMOODS/FORTE. Volume 7892 of Lecture Notes in Computer Science., Springer (2013) 289–304
[133] Armando, A., Costa, G., Merlo, A.: Formal modeling and reasoning about the Android security framework. In Palamidessi, C., Ryan, M., eds.: TGC’12: 7th International Symposium on Trustworthy Global Computing. Volume 8191 of Lecture Notes in Computer Science., Springer Berlin Heidelberg (2013) 64–81
[134] Romano, A.: Descripción y análisis formal del modelo de seguridad de Android. Master's thesis, Universidad Nacional de Rosario (2014) Available at: www.fing.edu.uy/inco/grupos/gsi/index.php?page=proygrado&locale=es.
[135] Weiler, S., Ihren, J.: Minimally Covering NSEC Records and DNSSEC On-line Signing. RFC 4470 (Proposed Standard) (2006)