Confinement for Active Objects

In this paper, we provide a formal framework for the security of distributed active objects. Active objects communicate asynchronously, implementing method calls via futures. We base the formal framework on a security model that uses a semi-lattice to enable multi-lateral security, crucial for distributed architectures. We further provide a security type system for the programming model ASPfun of functional active objects. Type safety and a confinement property are presented. ASPfun thus realizes secure down calls.


I. INTRODUCTION
Formal models for actor systems are becoming increasingly important for the security analysis of distributed applications. For example, models of organisational structures together with actors provide a basis for the analysis of insider threats [44], [45].
Active objects define a programming model similar to actors [4] but closely related to object-orientation. An object is an active object if it serves as an access point to its own methods and associated (passive) objects and their threads. Consequently, every call to those methods comes from outside. These remote calls are collected in a list of requests. The unit comprising the object's methods and attributes and its current requests is called an activity. The activity serves as a unit of distribution since it has a data space separate from its environment and can process requests independently. To enable asynchronous communication between distributed active objects, the concept of futures, promises for method call values, is used. Active objects are practically implemented in the Java API ProActive [14] developed by Inria and commercialized by its spin-off ActiveEON. Active objects are also a tangible abstraction for distributed information systems beyond just one specific language. ASP [15] is a calculus for active objects. ASP has been simplified into ASPfun, a calculus of functional active objects. ASPfun is formalized in Isabelle/HOL [28], thus providing a general automated framework for the exploration of properties of active objects.
In this paper, we use this framework to support security specification and analysis of active objects. The contributions of this paper are (a) the formalization of a novel security model for distributed active objects that supports multi-lateral security, (b) a type system for the static security analysis of ASPfun configurations, (c) preservation and the simple security property of confinement for well-typed configurations, and (d) an argument that secure down calls are possible in ASPfun. The novel security model [32] is tailored to active objects as it supports decentralized privacy specification of data in distributed entities. This is commonly known as multi-lateral security. To achieve it, we break away from the classical dogma of lattices of security classes and instead use semi-lattices. In our model, we implement confinement. Every object can remotely access only public (L) methods of other activities. Methods can be specified as private (H) in an activity, forbidding direct access. All other methods of objects are assumed to be L, partitioning methods locally into L and H. The security policy further forbids local information flow from H to L. To access an L-method remotely, the containing activity must also be visible to the calling activity in a configuration. In ASPfun, this visibility relation is implemented by activity references. In other active object programming languages, visibility could alternatively be given by an import relation or a registry.
In this paper, we provide an implementation of this security model in the ASPfun framework to illustrate its feasibility and applicability.
We design a security type system for ASPfun that implements a type check for a security specification of active objects and visibility. We prove the preservation property for type safety of the type system, guaranteeing that types are not changed by the evaluation of an ASPfun configuration. The specification of parts of an active object as confined, or private (H), is possible at the discretion of the user. This specification is entered as a security assignment into the type system; by showing a general theorem that confinement is entailed by well-typedness, we know that a well-typed program provides confinement of private methods. Although the confinement property intuitively suffices for security, at this point a formal security proof is still missing. Moreover, implicit flows may occur. We thus provide a definition of noninterference for active objects. Based on that, we prove that a well-typed configuration does not leak information to active objects below it in the hierarchy of the security model, i.e., multi-lateral security holds for well-typed configurations.
Remote method calls in ASPfun have no side-effects. Hence, secure down calls can be made. Confinement ensures that no private information is accessed remotely, and side-effect freedom guarantees that the call leaks no information from the caller side. Side effects are excluded in our formal model ASPfun because it is functional, but this property can also be implemented in the run-time systems of other active object languages.

Overview
We first review the semi-lattice for multi-lateral security (Section II-A) and ASPfun (Section II-B), introducing a running example of private sorting (Section II-C). Next, we describe how the semi-lattice model can be applied to active objects by instantiating it for ASPfun (Section III). We discuss secure down calls, a distinctive feature of ASPfun enabled by its functional nature that moreover does not restrict common bi-directional communication patterns. To show the latter point, we present how to implement the Needham-Schroeder Public Key protocol in ASPfun. We describe what we mean by security, i.e., the attacker model and the information flows between active objects through method calls (Section III-C), and illustrate their enforcement on the running example. Following that, we present a type system for the static analysis of a configuration of active objects in ASPfun (Section IV). Properties of this type system are presented (Section V): (a) preservation as a standard result of type safety and (b) confinement. We then define noninterference and multi-lateral security formally to present a soundness theorem, i.e., well-typed configurations are multi-laterally secure. We finish the paper with a related work section and some conclusions (Section VI). An Appendix contains Sections A to E with formal details, more examples, and full proofs.

A. Semi-Lattice Model for Privacy
We abstract the confinement property known from object-oriented languages, e.g., private/public in Java, and use it as a blueprint for a model of privacy in distributed objects. Consider Figure 1: multi-level security models support strict hierarchies like military organization (left); multi-lateral security [5, Ch. 8] is intended to support a decentralized security world where parties A to E share resources without a strict hierarchy (right), thereby granting privacy at the discretion of each party. But lattice-based security models usually achieve the middle schema: since a lattice has joins, there is a security class A ⊔ B ⊔ C ⊔ D ⊔ E that has unrestricted access to all classes A to E. For a truly decentralized multi-lateral security model this top element is considered harmful. To realize confinement, we exclude the top element by excluding joins from the lattice. We thereby arrive at an algebraic structure called a semi-lattice, in which meets always exist but not joins.
Semi-Lattice: The semi-lattice of security classes for active objects is a combination of global and local security lattices. The two lattices are used to classify the methods into groups and objects into hierarchies.

1) Local Classification:
The local classification is used to control the information flow inside an object, where methods are called and executed. For every active object there is a public (L) and a private (H) level partitioning the set of this active object's methods. The order relation of the lattice for local classification is the relation ≤ defined on {L, H} as {(L, L), (L, H), (H, H)}.
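Read concretely, this two-point order can be sketched as follows; the Python encoding is ours, not part of the paper's Isabelle/HOL formalization.

```python
# A minimal sketch of the local two-point lattice {L, H}: the order
# relation is exactly the set {(L, L), (L, H), (H, H)}.
LOCAL_ORDER = {("L", "L"), ("L", "H"), ("H", "H")}

def local_leq(a, b):
    """True iff information may legally flow from level a to level b."""
    return (a, b) in LOCAL_ORDER
```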
2) Global Classification: The purpose of the global classification is to control the course of information flows between methods of globally distributed objects and to lead their information together in a common dominating activity. The key to remotely accessing active objects is their identity (we use α, β to denote identities). As classes for the global lattice we use subsets of the set of all activity identities I. These subsets, or compartments, form the lattice of global classes: the powerset lattice P(I) over activity identities I.
In a concrete configuration, the global class label of an activity is the set of activity identities to which access is granted. For example, with respect to the Hasse diagram in Figure 2, an object at global level {α, β} ∈ P(I) can access any part (method) of an object labeled {β}, {α}, or {}, but only if this part is additionally labeled L. Vice versa, an object at level {α} can access neither L nor H parts of objects at level {β}, nor any parts at level {α, β}, but only L parts at level {}. Thus the classification of parts of an active object needs to combine labels.
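The access rule just described can be expressed as a small predicate; the function name and set encoding below are ours, offered as an illustration rather than part of the formal model.

```python
# Hypothetical helper expressing the remote-access rule: a caller at
# global level delta_caller may access a method classified
# (local_label, delta_method) only if delta_method is a subset of
# delta_caller and the method is additionally labeled L.
def may_access_remotely(delta_caller, method_class):
    local_label, delta_method = method_class
    return delta_method <= delta_caller and local_label == "L"
```

For instance, a caller at level {α, β} reaches an L method at level {β} but not an H method there, matching the discussion of Figure 2.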
3) Combination of Lattices: The security model of the semi-lattice needs to combine the local and the global classification scheme. As a result, a security class is a pair (S, δ) of a local and a global class. We want to impose confinement of methods in order to realize multi-lateral security with our model. Thus, we have to define the combination of the two constituent lattices such that its order relation corresponds to a multi-lateral information flow relation, i.e., private methods of an object are not accessible by any object other than the object itself.
Consequently, the new order for security classes is defined as follows. The combined security class ordering for active objects is defined such that a method class (H, δ) dominates (L, δ) and also (L, δ′) for all δ′ ⊆ δ, but no other class (X, δ′) dominates (H, δ). The combination of local and global types into pairs gives a partial order (in the formal definition, the vertical notation stacking φ over ξ abbreviates the conjunction φ ∧ ξ, and <_S = {(L, H)} denotes the strict ordering on the local security classes). Consequently, meets exist but no joins. The partial order CL is thus just a semi-lattice, as illustrated by an example in Figure 2 (right).

B. Functional Active Objects: ASPfun

ASPfun uses a slightly extended form of the simplest ς-calculus from the Theory of Objects [1] by distributing ς-calculus objects into activities. The calculus ASPfun is functional because method update is realized on a copy of the active object: there are no side-effects.
1) ς-calculus: Objects consist of a set of labeled methods [l_i = ς(y)b]_{i∈1..n}. Attributes are considered as methods not using the parameters. The calculus features method call t.l(s) and method update t.l := ς(y)b on objects, where ς is the binder for the method parameter y. Every method may also contain a "this" element representing the surrounding object. Note that "this" is usually [1] expressed as an additional parameter x in each method's ς scope, but in this exposition we literally write this to facilitate understanding. It is, however, important to bear in mind that formally this is a variable representing a copy of the current object and that this variable is scoped as a local variable for each object. The ς-calculus is Turing complete; e.g., it can simulate the λ-calculus. We illustrate the ς-calculus by our example below.
2) Syntax of ASPfun: ASPfun is a minimal extension of the ς-calculus by one single additional primitive, Active, for creating an activity. In the syntax (see Table I) we distinguish underlined constructs representing the static syntax that may be used by a programmer from futures and active object references, which are created only at runtime. We use the naming convention s, t for ς-terms, α, β for active objects, f_k, f_j for futures, and Q_α, Q_β for request queues.
3) Futures: A future can intuitively be described as a promise for the result of a method call. The concept of futures has been introduced in Multilisp [26] and enables asynchronous processing of method calls in distributed applications: on calling a method a future is immediately returned to the caller enabling the continuation of the computation at the caller side. Only if the method call's value is needed, a so-called wait-by-necessity may occur. Futures identify the results of asynchronous method invocations to an activity. Technically, we can see a future as a pair consisting of a future reference and a future value. The future reference points to the future value which is the instance of a method call in the request queue of a remote activity. In the following, we will use future and future reference synonymously for simplicity. Futures can be transmitted between activities. Thus different activities can use the same future.
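The wait-by-necessity behaviour can be mimicked with Python's standard futures; this is an analogy only (Python threads are not ASPfun activities), but it shows the two moments: the handle is returned immediately, and blocking happens only when the value is demanded.

```python
# Analogy only: concurrent.futures returns a future handle immediately,
# and .result() is the wait-by-necessity point.
from concurrent.futures import ThreadPoolExecutor

def slow_method(x):
    return x * 2  # stands in for a remote method body

with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(slow_method, 21)  # caller continues immediately
    # ... the caller may compute other things here ...
    value = fut.result()                # wait-by-necessity: block if needed
```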

4) Configuration:
A configuration is a set of activities α_i[(f_j ↦ s_j)_{j∈I_i}, t_i]_{i∈1..n}. The unordered list (f_j ↦ s_j)_{j∈I_i} represents the request queue, t_i the active object, and α_i ∈ dom(C) the activity reference. A configuration represents the "state" of a distributed system by its current parallel activities. Computation is then the state change induced by the evaluation of method calls in the request queues of the activities. Since ASPfun is functional, the local active object does not change (it is immutable), but the configuration is changed globally by the stepwise computation of requests and the creation of new activities. The constructor Active(t) activates the object t by creating a new activity in which t becomes the active object. Although the active object of an activity is immutable, an update operation on activities is provided. It performs an update on a freshly created copy of the active object, placing it into a new activity with an empty request queue; the invoking context receives the new activity reference in return. If we want to model operations that change active objects, we can do so using this update. Although the changes are not literally performed on the original objects, a state change can thus be implemented at the level of configurations (for examples see [28]). The goal of ASPfun is not efficiency but minimality of representation with respect to the decisive language features of active objects while being fully formal.

5) Results, Programs and Initial Configuration: A term is a result, i.e., a totally evaluated term, if it is either an object (as in [1]) or an activity reference. We consider results as values.
In a usual programming language, a programmer does not write configurations but usual programs invoking some distribution or concurrency primitives (in ASPfun, Active is the only such primitive). This is reflected by the ASPfun syntax given above. A "program" is a term s_0 given by this static syntax (it has no future or active object reference and no free variables). In order to be evaluated, this program must be placed in an initial configuration, which has a single activity with a single request consisting of the user program.

Sets of data that can be used as values are indispensable if we want to reason about information flows. In ASPfun, such values can be added as results (see above) to any configuration, either by explicit use of corresponding object terms or by an appropriate extension of the initial configuration that sets up a data base of basic datatypes, like integers or strings.
6) Informal Semantics of ASPfun: Syntactically, ASPfun merely extends the ς-calculus by a second parameter for methods (the first being this) and the Active primitive, but the latter gives rise to a completely new semantic layer for the evaluation of distributed activities in a configuration.
Local semantics (the relation →_ς) and the parallel (configuration) semantics (the relation →) are given by the set of reduction rules informally described as follows (see Appendix C for the formal semantics).
• CALL, UPDATE, LOCAL: the local reduction relation →_ς is based on the ς-calculus.
• ACTIVE: Active(t) creates a new activity α with t as its active object, a globally new name α, and initially no futures; in ASPfun notation this is α[∅, t].
• REQUEST, SELF-REQUEST: a method call β.l(t) creates a new future f_k for the method l of active object β, placing the resulting future value onto β's request queue; the future f_k can be used to refer to the future value of β.l(t) at any time.
• REPLY: returns a result, i.e., replaces the future f_k by the referenced result term, i.e., the future value resulting from some β.l(t).
• UPDATE-AO: active object update creates a copy of the active object and updates the active object of the copy; the original remains the same (functional active objects are immutable).
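As a rough, executable illustration of REQUEST and REPLY only (a drastic simplification of the formal semantics in Appendix C: we evaluate the request body immediately instead of reducing it stepwise, and all names are ours):

```python
# Toy model of REQUEST and REPLY: an activity holds a request queue
# mapping future names to values.
config = {"beta": {"queue": {}, "object": {"l": lambda arg: arg + 1}}}
_fresh = iter("f%d" % i for i in range(1000))  # supply of future names

def request(config, target, label, arg):
    """REQUEST: enqueue a call on the target and return a fresh future."""
    fk = next(_fresh)
    config[target]["queue"][fk] = config[target]["object"][label](arg)
    return fk

def reply(config, target, fk):
    """REPLY: replace the future reference by the computed future value."""
    return config[target]["queue"][fk]
```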

C. Running Example: Private Sorting
As an example of a standard program, consider the implementation of quicksort as an active object χ, illustrated in Figure 3. The operations we use are :: for list cons, @ for list append, # for list length, hd for the list head, and a let construct (see [28] for details on their implementation). The quicksort algorithm in χ is parametric over a method "ord" returning a numerical value that is used in method "part". This method ord is assumed to be available uniformly in the target objects contained in the list that shall be sorted. We omit the parameter in calls to ord because it is unused, i.e., it is the empty object [].
The following controller object α holds a list of active objects (for example [β_1, β_2, β_3] in Figure 3, but in general arbitrary and thus represented as . . . below) and uses the sorter χ. The target objects contained in α's list (omitted) are active objects of the kind of β below. Here, the n in the body of method ord is an integer specific to β, and the field income represents some private, confidential data in β.
If active objects of the kind of β represent principals in the system, the privacy challenge becomes clear: the controller object α should be able to sort its list of β-principals without learning anything about their private data, here income.
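The intended behaviour can be mimicked in plain Python as an informal analogy (not ASPfun; Python attribute privacy is by convention only, and the class and method names below are ours): the β-principals expose a public ord while keeping income private, and the controller sorts using ord values alone.

```python
# Informal analogy of the private-sorting example.
class Beta:
    def __init__(self, n, income):
        self._n = n            # the beta-specific integer behind ord
        self._income = income  # private (H): must never reach the controller

    def ord(self):             # public (L), as used by method "part" in chi
        return self._n

def controller_sort(betas):
    """Sort the list of principals using only their public ord values."""
    return sorted(betas, key=lambda b: b.ord())
```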

III. SEMI-LATTICE MODEL FOR ASPFUN
As a proof of concept, we show that the calculus of functional active objects ASPfun gives rise to a fairly straightforward implementation of the security semi-lattice by mapping the concepts of the security model onto language concepts as follows.
• The global class ordering on sets of activity identities corresponds to the sets of activity references that are accessible from within an activity. We name this accessibility relation visibility (see Definition 3.1). It is a consequence of the structure of a configuration and thereby at the discretion of the configuration programmer.
• The local classification of methods into public L and private H methods is specified as an additional security assignment mapping method names to {L, H} at the discretion of the user.
• Based on these two devices for specifying and implementing a security policy with active objects, we devise a security type system for ASPfun as a practical verification tool. The types of this type system correspond quite closely to the security classes of the semi-lattice defined in Section II-A: object types are pairs of security assignment maps and global levels.
A. Assigning Security Classes to Active Objects

Visibility: We define visibility as the "distributed part" of accessibility within a configuration. It derives from the activity references and thus represents the global security specification as programmed into a configuration.

Definition 3.1 (Visibility): Let C be a configuration with a security specification sec partitioning the methods of each of C's active objects locally into H and L methods. Then the relation ≤_VI is inductively defined on activity references by the following two cases.
We use the vertical notation stacking φ over ξ to abbreviate φ ∧ ξ; for the context variable E see Appendix C. We then define the visibility relation for any C and sec as the reflexive transitive closure of ≤_VI, and denote the visibility range of an activity α under Definition 3.1 as VI_sec(α, C). The visibility relation extends naturally to a relation on global levels: every activity α ∈ C may be assigned the global level corresponding to the union of all its visible activities VI_sec(α, C). This relation is a subrelation of the subset relation on the powerset of activity identities introduced before and thus also a partial order. We use it as the semantics of the subtype relation in Section IV.
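The visibility range as a reflexive-transitive closure of direct references can be sketched as a small graph traversal; the function name and dictionary encoding are ours.

```python
# Sketch of the visibility range: all activities reachable from alpha
# via direct activity references, including alpha itself (reflexivity).
def visibility_range(refs, alpha):
    """refs maps each activity to the set of activities it references."""
    seen, todo = {alpha}, [alpha]
    while todo:
        a = todo.pop()
        for b in refs.get(a, set()):
            if b not in seen:
                seen.add(b)
                todo.append(b)
    return seen
```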
Assigning Security Classes to the Example: To illustrate how activities are labeled in the semi-lattice model using visibility, consider the running example above, where we assume the list in controller α to contain active object references [β_0, . . . , β_n]. We assign to each activity the global class containing its own identity and those of all its visible activities. For our example, the global class of controller α is δ_α = {α, χ, β_0, . . . , β_n}.
The global classes δ_βi of the β_i objects and δ_χ in turn contain all their visible objects' identities. Thus, the global classes are ordered δ_βi ⊆ δ_α for all i and δ_χ ⊆ δ_α. The security classification of methods assigns pairs of local levels and global classes to method names, for example, ord_βi ↦ (L, δ_βi) and income_βi ↦ (H, δ_βi).

Practical Classification of Objects: The pairs (S, δ) in the partial order CL (see Section II-A3) are the security classes for methods of active objects. The semi-lattice is actually defined as a partial order on object methods rather than objects. To classify objects we consider only the global part of the classification, i.e., the second component δ, because all methods of an active object have this δ in common. Intuitively, this factorization corresponds to drawing objects as borders into the semi-lattice structure (see Figure 4). These borders represent the confinement zone of an active object.
Formally, we consider an object class to be the factorization ([l_i ↦ S_i], δ): a pair of a security assignment to {L, H} for each method l_i of an object and the object's global class δ, common to all parts. An activity contains one active object but may contain various passive objects. The security assignment of an active object must be defined for all contained objects (see rule SECASS SUBSUMPTION in Section IV).
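The combined ordering on classes (S, δ) from Section II-A3, together with meets, can be sketched as follows; the Python encoding is ours, and it illustrates why the structure is only a semi-lattice (incomparable H classes have a meet but no join).

```python
# Sketch of the combined security-class ordering and meets.
def cl_leq(c1, c2):
    (s1, d1), (s2, d2) = c1, c2
    if s1 == "H":
        return c1 == c2   # nothing but (H, d) itself dominates (H, d)
    return d1 <= d2       # (L, d1) is below (X, d2) iff d1 subset of d2

def cl_meet(c1, c2):
    """Greatest lower bound; joins need not exist in the semi-lattice."""
    if cl_leq(c1, c2):
        return c1
    if cl_leq(c2, c1):
        return c2
    return ("L", c1[1] & c2[1])  # incomparable classes meet in an L class
```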

B. Secure Down Calls
In a distributed system with a nontrivial security classification of communicating objects, secure down calls are normally not possible because they would violate the security policy of "no down-flows" of information. In general, a method call represents an information flow to the remote object in the form of the request itself and the parameters passed; its response flows information back in the form of a reply. Therefore, secure method communication is trivially restricted to objects of one class; otherwise one direction would contradict the policy of "no down-flows". This catch-22 situation can be overcome if we exclude side-effects: the requests do not leave traces in the remote object. In ASPfun this is given implicitly by the semantics, because requests created by method calls in the remote object are not accessible by the remote object itself. The reply, however, may still flow information back to the caller; this flow is upward and therefore permitted.
As an overall result of the properties presented in this paper, we can infer that secure down calls are possible. The reasoning is as follows. We assume as given a configuration together with a security specification sec partitioning a portion of the methods into public (L) and private (H). If this configuration can be type checked according to our type system, it is secure, i.e., we know it has confinement and is noninterfering, as we will see in Section V. Therefore, futures can be securely used in higher security classes, i.e., method results may flow up but, since no implicit flows exist, information is not leaked in the process.
Side-effect freedom permits calling down securely because the call leaves no visible trace. But does this not also exclude any mutual information exchange on the same level? It might seem so, but fortunately, if two activities are in the same class, method calls between them are possible, permitting bidirectional information flow. As an example, an implementation of the Needham-Schroeder public key protocol is given next.

Needham-Schroeder Public Key Protocol (NSPK) in ASPfun
This example illustrates that mutual information exchange between different activities is possible inside one security class. The easiest way to illustrate this is to use a protocol. We use the corrected short form of the Needham-Schroeder Public Key Protocol (NSPK) originally published in [40]. The originally published protocol omitted the B inside the encrypted message to A in step two, thereby giving rise to the well-known attack of [33].
The protocol is usually written as follows (in the corrected form, with B added in message two), using public keys K_A, K_B known globally and their secret counterparts, and establishing nonces N_A, N_B in the process of authentication.

1. A → B : {N_A, A}_{K_B}
2. B → A : {N_A, N_B, B}_{K_A}
3. A → B : {N_B}_{K_B}
In ASPfun, the protocol is implemented as a set of methods of two activities A and B. We omit details about decoding and keys because it is clear that they can be implemented, and we want to highlight the communication process.
then (this.knows := NB).NA := NA else this.knows := error

The protocol can be executed by invoking method A.step_1, which in turn invokes B.step_2 and A.step_3. In each of the steps the nonces are created, encrypted, and tested between the method calls. If the communicated messages adhere to the protocol, i.e., the nonces and identities correspond to what has been sent in earlier steps, the own nonces are updated into the methods A.NA and B.NB and the other's nonces into the respective method "knows". Otherwise, the protocol failure is recorded as "error" in method knows. This protocol implementation illustrates that mutual information flows are possible locally within one security class. The type system that we present in Section IV accepts this configuration since the calls are of the same global level δ.
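For readers who prefer an executable analogy, the message flow (without encryption, as in the ASPfun version above) can be sketched in Python; all names are ours, and this is not a cryptographic implementation.

```python
# Executable analogy of the NSPK message flow: two principals in the
# same security class exchange and check nonces mutually.
import secrets

class Principal:
    def __init__(self, name):
        self.name, self.nonce, self.knows = name, None, None

def run_nspk(a, b):
    a.nonce = secrets.token_hex(8)
    msg1 = (a.nonce, a.name)            # step 1: A -> B : NA, A
    b.nonce = secrets.token_hex(8)
    msg2 = (msg1[0], b.nonce, b.name)   # step 2: B -> A : NA, NB, B
    if msg2[0] == a.nonce and msg2[2] == b.name:
        a.knows = msg2[1]               # A records NB
        msg3 = (a.knows,)               # step 3: A -> B : NB
        if msg3[0] == b.nonce:
            b.knows = msg1[0]           # B records NA
            return True
    a.knows = b.knows = "error"
    return False
```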

C. Security Analysis
In language-based security, we may use the means provided by a language to enforce security. That is, we make use of certain security guarantees that correspond to implicit assumptions concerning the execution of programs. The language introduces a security perimeter: we assume that the language compilation and run-time system are trusted (below the perimeter), while the language is responsible for the security above the perimeter by virtue of its semantics and other language tools, e.g., static analysis by type checking. We now describe the security goal of confidentiality addressed in this paper and elaborate on the attacker model for active objects.
Security Goal Confidentiality: A computation of active objects is an evaluation of a distributed set of mutually referencing activities. Principals that use the system can observe it only through the system's devices. We make the simplifying assumption that principals can be identified with activities. Principals, objects, programs, and values are thus all contained in the configuration. There are no external inputs to this system; it is a closed system of communicating actors. We concentrate in this paper on confidentiality, i.e., activities should not learn anything about private parts of other activities, neither directly nor indirectly. Integrity is dual to this notion, and we believe that it can be derived from our present work by inverting the order relation.
Attacker Model: As a further consequence of the language-based approach to security, we restrict the attacker to have only the means of the language to make observations. Consequently, we consider the attacker, like any other principal, as being represented by an activity. The attacker's knowledge is determined by all active objects he sees, more precisely by their public parts. If internal computations in inaccessible parts of other objects leak information, the attacker can learn about them by noticing differences between runs of the same configuration. Inaccessible parts of other objects are their private methods or other objects referenced in these private parts. The language semantics and the additional static analysis must guarantee that, under the assumption of the security perimeter, an attacker cannot learn anything about private parts.

D. Information Flow Control
Information flow control [20] technically uses an information flow policy given by the specification of a set of security classes to classify information and a flow relation on these classes that defines the allowed information flows. System entities that contain information, for example variables x, y, are bound to security classes. Any operation that uses the value of x to calculate that of y creates a flow of information from x to y. This operation is only admissible if the class of y dominates the class of x in the flow relation, formally written δ_x ≤ δ_y, where δ_e denotes the class of entity e. The concept of information flow classically stipulates that the security classes together with the flow relation as an order relation on the classes form a lattice [19], [17]. We differ here since we only require a semi-lattice.
Information Flow Control for Active Objects: Information is contained in data values, which are here either objects or activity references (see Section II-B). To apply the concept of information flow control to configurations of active objects, we need to interpret the above notions of security classes, their flow relation, and the entities that are assigned to the security classes: we identify the classes of our security model as the security classes of methods and the flow relation as the semi-lattice ordering on these classes (see Section II-A). Flows of information local to objects are generated by local method calls between neighboring methods of the same object. These are regulated by the local L/H-classification of an object's methods (H may call L and H, but L only L). Global flows result from remote method calls between objects' methods. The combined admissible flows have to be in accordance with a concrete configuration and its L/H specification.
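The local flow rule can be stated as a one-line predicate; the encoding is ours.

```python
# The local flow rule: H-methods may call L and H methods of the same
# object, while L-methods may call only L methods.
def local_call_allowed(caller_level, callee_level):
    return callee_level == "L" or caller_level == "H"
```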

E. Enforcing Legal Information Flows
To illustrate the task of controlling information flows, we first extend the intuition about information flow to configurations of active objects. An active object sees only other active objects that are directly referenced in its methods or those that are indirectly visible via public methods of visible objects. From the viewpoint of one active object, information may flow into the object and out of the object. For each direction, there are two ways in which information may flow: implicit or explicit (direct) flows. Information flows explicitly into an object by parameters passed to remote calls directed to the object's methods; it may also flow implicitly into the object simply if the choice of which method is called depends on the control flow of the calling object. Similarly, information flows explicitly out of an active object by parameters passed to remote method calls, and implicitly out of it if the choice depends on the object's own control flow. Some of these flows are illustrated on our running example next.
Running Example: Implicit Information Flow: We now illustrate the security model on the running example, showing implicit information flows of the active objects introduced above in Section II-C. Let us assume that the β-objects featuring in the controller's list had the following implementation.
Let us further assume that ord is again a public method and income again the private field of β. We then have a case of an implicit information flow. Since the guard of the if-command in ord depends on the private field income, the order number of a β-object is effectively 1 if the income of β is at least 1000 and 0 otherwise. In our security model, this control flow represents an illicit flow of information from a high-level value in β to its public parts and is thus visible to the remote controller. This should not be the case since H ⋢ L. It should thus be detectable by an information flow control analysis. We show next how to detect it statically by a security type system.
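The implicit flow can be made concrete in a short Python analogue (hypothetical names, transliterated from the ς-calculus code): the public return value of ord depends on the private field income only through the guard of the conditional, yet a remote caller learns one bit about income per call.

```python
# Python analogue (illustrative only) of the leaky ord method:
# income is meant to be private (H), the return value is public (L).
def ord_(income):
    # The guard depends on H data, so the L result leaks one bit:
    # whether income reaches 1000.
    return 1 if income / 10**3 >= 1 else 0
```

A remote observer calling `ord_` cannot read income directly, but can distinguish any β-object with income below 1000 from one at or above it, which is exactly the implicit flow the type system must reject.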

IV. SECURITY TYPE SYSTEM
Before formalizing security of active objects and defining a type system that implements rules for a static analysis, we summarize the security considerations so far and motivate the upcoming type system and proofs.

A. Intermediate Summary, Motivation, and Outlook
In a configuration of active objects we may have direct (explicit) and implicit information flows through method calls which are controlled differently.
• To guarantee only legal information flows on direct calls, we rely on the labeling of methods by L and H and on the global hierarchy. This corresponds to the simple security property of confinement: remote method calls can refer only to low methods of visible objects. Confinement can be checked locally. It is decidable since it corresponds to merely looking up method labels in a security assignment.
• We will use a program counter PC that records the current security level of a method evaluation. Locally, within the confinement zone of an activity, accessing H-methods in L-methods may create implicit flows, as seen in the example. To detect such flows and protect the confidential information from flowing out of the confinement zone of the activity, the program counter records these dependencies by increasing to H. In combination with the method labels, the PC thereby allows associating the calling context with the called method. Implemented in type rules, this enables static checking, and thus control, of information flows in evaluations of configurations.
As a security enforcement mechanism of our multilateral security model for active objects, we propose a security type system, i.e., a rule set for static analysis of a configuration with respect to its methods' security assignment. The idea of a security type system is as follows. Not all possible programs in ASP fun are secure; in general, for example, any method can be accessed in an active object. The purpose of a type system for security is to supply a set of simple rules defining types of configurations, enabling a static check (before runtime) of whether those contain only allowed information flows.
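The program-counter mechanism described in the second bullet can be sketched as follows. This is a minimal, hypothetical model (our own names, not ASPfun syntax): the PC starts at L, is raised to H as soon as an H-labelled method is read, and a remote call is then accepted only for an L callee in an L context.

```python
# Hypothetical sketch of PC tracking for implicit flows.
def join(a, b):
    """Join on the two-point lattice L <= H."""
    return "H" if "H" in (a, b) else "L"

def check_body(accessed_labels, assignment, remote_callee=None):
    """accessed_labels: local methods read by a method body, in order.
    assignment: maps method labels to 'L' or 'H'.
    Returns the final PC; raises if a remote call is illegal."""
    pc = "L"
    for l in accessed_labels:
        pc = join(pc, assignment[l])  # reading H data raises the PC
    if remote_callee is not None:
        # Confinement: only L methods may be called remotely,
        # and only from an L context (no implicit leak via the PC).
        if pc == "H" or assignment[remote_callee] == "H":
            raise ValueError("illegal information flow")
    return pc
```

For example, a body that reads the H-field income ends with PC = H, and any subsequent remote call from that body is rejected, which mirrors the role of the PC in the type rules.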
The above described cases of information flows need to be implemented in the type rules such that the rules allow inferring a type only for secure configurations and otherwise reject them. The first, direct case of information flow is intuitively simple, as it boils down to locally looking up the security level of a method before deciding whether a remote call from higher up in the hierarchy can be granted. The "up in the hierarchy" is captured by a subtype relation encoding the global hierarchy described by the visibility relation. After the presentation of the type system in this section, we prove in the following Section V that confinement is a security property implied by it.
How to avoid and detect implicit flows is more subtle: the combination of a program counter PC with the called method's security label allows us to combine the provenance of a call with the security level of the call context. However, this combination needs to adhere to the security specification for all runs of a program, and thus for all possible calls in a context. The appropriate notion of security for this is noninterference: in all runs, the observable (low) parts of configurations need to look "the same". Therefore, we first introduce a notion of noninterference for active objects, based on which we can then express the absence of implicit flows and prove multilateral security. The definition of noninterference and the proofs of properties are contained in Section V. We first introduce the type system.

B. Type System
Type Formation: We need to provide types for objects and for configurations of active objects, the latter by mapping names of futures and activities to object types. The two-dimensional classification of local and global security described above translates directly into the object types of the security type system. A type is a pair ([l_i → S_i]_{i=1..n}, δ_α). The first component [l_i → S_i]_{i=1..n} provides the partition of methods into public (L) and private (H) methods for the object. The other element δ_α of an object type represents the global classification of an object. This global level corresponds to the classification of the object's surrounding activity α derived from its visibility. We adopt the following naming conventions for variables. δ stands for the global part of a type. We use A to denote security assignments, e.g. A = [l_i → S_i]_{i=1..n}. S_i, or simply S, stands for the levels L or H. In general, we use indexed variables to designate result values of a function, e.g., S_i for the level value of method l_i, also expressed as A(l_i). We use Σ for object types Σ = ([l_i → S_i]_{i=1..n}, δ). To map an object type Σ to its security assignment or its global part, respectively, we use the projections ass(Σ) and glob(Σ). We formally use sec as the parameter for the overall methods' security assignment of an entire configuration C. A configuration type is a triple of maps ⟨Γ_act, Γ_fut, sec⟩ assigning types to all activities and futures of a configuration and in addition containing the security assignment sec.
Typing Relations: A typing judgement T; S ⊢ x : ([l_i → S_i]_{i=1..n}, δ) reads: given type assumptions in T, term x has type ([l_i → S_i]_{i=1..n}, δ) in the context of a program counter S ∈ {L, H}. A program counter (PC) is a common technique in information flow control originating in Fenton's Data Mark Machine [21]. The PC encodes the highest security level that has been reached in all possible control flows leading to the current control state. In a functional language like ASP fun, this highest security level of all execution paths is simply the level of the evaluation context for the term x. Thus, the PC is some S ∈ {L, H} denoting the security label of the local context. The type environment T contains types Σ for the parameter variables y and for the parameter this, both paired with the local security level S representing their local PC.
Subsumption Rules: Subsumption means that an element of a type also has the type of its supertype. It is responsible for making the partial order relation on global levels a subtype relation. Intuitively, GLOB SUBSUMPTION says that if a term can be typed in a low context it may as well be "lifted", i.e., considered as of higher global level, thereby enforcing (together with TYPE CALL below) that only L-methods can be accessed remotely. This corresponds to the confinement property as formally shown in Section V-B. The local security class ordering is L ≤ H and features implicitly in the type system in the form of a second, local, subsumption rule. Finally, the rule SECASS SUBSUMPTION allows the security assignment type of an object to be extended. This rule is necessary to consider an object also as a local object inside another (active) object adopting its security assignment.
Object Typing: The object typing rules in Table III describe how object types are derived for all possible terms of ASP fun. The VAL-rules state that type assumptions stacked on the type environment T left of the turnstile can be used in type judgments. These rules apply to the two kinds of environment entries, for this and for the y-parameter. Since this represents the entire object value itself, its PC is derived as the supremum of all security levels assigned to methods in it. We express this supremum as the join over all levels ⊔_{i∈1..n} S_i. The other rules are explained as follows. TYPE OBJECT: if every method l_i of an object is typeable with some local type S_i ∈ {L, H} assigned to it by the assignment component A of Σ, then the object comprising these methods is typeable with their maximal local type. Thus, objects that contain H-methods cannot themselves be contained in other L-methods. Otherwise, local objects containing confidential parts could be typed with GLOB SUBSUMPTION at higher levels (see the Appendix for a "borderline example" illustrating this point).
Only objects that are purely made from L-methods can be accessed remotely in their entirety. Despite this strong restriction, the CALL rule permits selectively accessing L-methods of such objects (see below). The PC guarantees that all method bodies t_i are typeable at their given privacy level S_i. The rule TYPE CALL is the central rule enforcing that only L-methods can be called in any object, locally or remotely. Initially, a call o.l_j(t) can only be typed as Σ = ([l_i → S_i]_{i=1..n}, δ) for the δ of the surrounding object o.
Although the PC in the typing of o is (by TYPE OBJECT) the maximal level of all methods, we may still call L-methods on objects that are typed with PC H. The PC in the typing of the resulting call o.l_j(t) is coerced to S_j, i.e., the security level assigned to the called method. This prevents H-methods from being callable remotely while still admitting calls to methods on objects that are themselves typed with an H-PC. Because of the rule GLOB SUBSUMPTION, any method call o.l_j(t) : (A, δ) can also be interpreted as o.l_j(t) : (A, δ′) for δ ⊑ δ′, but this is restricted to L contexts: a method call typeable in an H context cannot be "lifted", i.e., it cannot be interpreted as well-typed with δ′; to prevent this, the PC in GLOB SUBSUMPTION is L, thus excluding CALL instantiations for methods l_j with S_j = H. UPDATE: an update of an object's method is possible but only conservatively, i.e., the types must remain the same.
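The interplay of TYPE OBJECT, TYPE CALL, and GLOB SUBSUMPTION described above can be sketched in a few lines. This is a hypothetical model (our own Python names, not the formal rules): an object is typed with the join of its methods' levels as PC, a call is re-typed with the callee's level, and lifting to a higher global level is possible only from an L context.

```python
# Hedged sketch of the TYPE OBJECT / TYPE CALL / GLOB SUBSUMPTION interplay.
def join_all(levels):
    """Supremum on the two-point lattice L <= H."""
    return "H" if "H" in levels else "L"

def type_object(assignment):
    """TYPE OBJECT: the object's PC is the join of its method levels."""
    return join_all(assignment.values())

def type_call(assignment, label):
    """TYPE CALL: the PC of a call o.l_j(t) is coerced to S_j = A(l_j)."""
    return assignment[label]

def liftable(pc):
    """GLOB SUBSUMPTION applies only in an L context; a call with H-PC
    cannot be lifted to a higher global level, so it stays uncallable
    from remote objects."""
    return pc == "L"
```

For an assignment with one H-method and one L-method, the object itself types with PC H, yet a call to its L-method is coerced back to L and remains liftable, while a call to the H-method is not, which is the coercion argument made above.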
Configuration Typing: The rules for configurations (see Table IV) use the union of all futures of a configuration. They anticipate two semantic properties of futures in well-formed ASP fun configurations. We use well-formedness of ASP fun configurations as defined in [28]; in brief: there are no dangling references.
Property 4.2 (Unique Future Home Activity): Every future is defined in the request queue of one unique activity.
We denote this unique activity α as futact C (f k ).
Next, every future f_k in a well-formed configuration C is created by a call to a unique label in its home activity. We denote this unique method label as futlab_C(f_k). We omit the configuration C for the previous two operators when it is clear from context.
The configuration type rules link up types for activities and futures with the local types of terms in active objects and request lists (see Table IV).
TYPE ACTIVE allows transferring the type of an object term to its activation, which coerces the types of activities and activity references to coincide with the types of their defining objects. This is achieved together with TYPE ACTIVE OBJECT REFERENCE and the clause ⟨Γ_act, Γ_fut, sec⟩, ∅ ⊢ a : Γ_act(α) of TYPE CONFIGURATION.
TYPE FUTURE REFERENCE similarly assigns the types for the future references in Γ fut . For a given activity α, this rule further coerces the P C for the typing of f k to coincide with A α (futlab(f k )), i.e., α's security assignment applied to the label that leads to the instance of f k .
The rule TYPE CONFIGURATION ensures consistency between the type maps Γ_fut, Γ_act, and the overall security assignment sec. It looks rather complex, but it essentially only scoops up what has been prepared by the other rules. The first two clauses ensure that the domain of activities coincides with the configuration domain and, similarly for futures, that the future type map Γ_fut is defined over precisely all futures in all activities. The third clause integrates the security specification sec to be respected by the individual security assignments of activities. The last large clause of TYPE CONFIGURATION specifies first that the activity types assigned to activity references by Γ_act coincide with their active object types. The second part of that clause addresses the future types in Γ_fut. Note that in the context of this clause we may assume α = futact(f_k) by Property 4.2. The clause ensures that the types assigned by Γ_fut coincide with the ones assigned by Γ_act to their home activities. Additionally, this final clause ensures that the request Q(f_k) has the type assigned by the future map Γ_fut for this future f_k, with the PC corresponding to the PC assigned by the security assignment in the home activity.
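The first two clauses of TYPE CONFIGURATION are plain domain conditions and can be illustrated as follows. The data layout is hypothetical (a configuration as a map from activity names to the futures in their request queues); only the two domain checks correspond to the rule.

```python
# Illustrative check of the first two clauses of TYPE CONFIGURATION:
# activity types cover exactly the configuration's activities, and the
# future type map is defined over precisely the futures of all queues.
def well_scoped(config, gamma_act, gamma_fut):
    """config: activity name -> set of futures in its request queue.
    gamma_act / gamma_fut: the type maps, used here only for their
    domains (keys)."""
    if set(gamma_act) != set(config):
        return False
    all_futures = set().union(*config.values()) if config else set()
    return set(gamma_fut) == all_futures
```

The remaining clauses of the rule relate the contents of the maps (object types, security assignments, and PCs) and are not captured by this sketch.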

C. Running Example: Type System Checks Example
For the sake of argument, we illustrate the application of the type system with an inconsistent constraint on the assignment sec for the example of Section II-C. The extended implementation discussed in Section III-E contains the changed ord function (we repeat the code here for convenience). If we specify income as private, this extended version of the running example contains an implicit illegal information flow. Any security assignment sec that fulfills the constraint income → H must be fallacious, since the call β_i.ord in the manager object α reveals information about the confidential (H) value of income. The type system rejects any such sec since no consistent type can be inferred for the configuration in this case, as we illustrate next. The failed type check thus proves that for the extended configuration all specifications would have to specify income → L, because the assumption income → H was inconsistent.
The global classification is derived according to the visibility relation (Definition 3.1) from the example's configuration as δ_{β_i} ⊑ δ_α for all i. To be able to type the call to β_i.ord in the manager object α, this method must be an L-method according to TYPE CALL. Hence, we need the following extended constraint on sec.
We now show that β_i (for an arbitrary i in the configuration) cannot be typed with this type constraint. The final step in a type inference arriving at a type Γ_act(β_i) for β_i can only be an instance of TYPE OBJECT, which looks as follows, where true is a boolean object containing methods if, then, and else. The details of this boolean object and its typing, as well as the details of the following abridged reasoning, are contained in Appendices A and B. The main point that we can see from this implementation is that the type A(ord) is coerced by the type A(if), i.e., it must hold that A(ord) = A(if) in Σ_{β_i}. This is the case because t_ord is a call to the method if. According to the rule TYPE CALL, the PC must thus be S_if, which corresponds here to A(if) and coincides with the PC A(ord) in the above instance of TYPE OBJECT, i.e., A(ord) = A(if). Now, the remaining argument just shows that A(if) must be H. In short form, the reasoning goes as follows. By assumption, A(income) must be H. Thus, according to TYPE CALL and VAL SELF, this.income is typeable only with PC H. We must apply TYPE CALL twice to type this.>0(this.div10³(this.income)). The PC for typing this is H each time, because it must be the same as the PC (named S_j) in typing the parameter (named t in the rule TYPE CALL), and the previous typing of the parameter this.income has an H-PC. The PC in the application of the rule TYPE UPDATE is then also coerced to H in the typing of the newly inserted method body l_j, here if. Hence, this update coerces the PC S_if, i.e., A(if), to be H. The two following updates do not change the security type of the method if. We are finished since, as we have seen above, A(if) = A(ord). Thus, A(ord) must be H and cannot be typed L as would be necessary to call this method remotely in α. The typing fails. We have a contradiction to the initially required specification that income be private.
Since this was the only assumption, it follows by contraposition that income must be L to make the configuration typeable. This illustrates the correctness of the type system by example: the configuration C of our running example cannot be typed with the constraint income → H, since any attempt to infer a type ⟨Γ_act, Γ_fut, sec⟩ for it fails. The type inference reveals the dependency between ord and income: a security leak, because it would enable implicit information flows from β_i's private part to χ.
In the following, we provide general proofs showing that the type system is sound, i.e., it generally implies security not just for the example.

A. Preservation
Type safety always includes a preservation theorem: if a program can be typed, the type has to be preserved by the evaluation of the program; otherwise the guarantees encoded in the types would be lost. In our case, since configurations change dynamically during evaluation with the reduction relation →, preservation has a slightly unusual form, as the configuration type actually changes. But this change is conservative, i.e., dynamically created new elements are assigned new types but old types persist, as represented below by ⊆. Alongside the configuration types, the security class lattice is likewise extended conservatively by extension of the visibility relation.
The proof of this theorem has two parts. The first part shows a local preservation property for the part of the type system that describes secure method calls at the level of objects, i.e., the rules depicted in Tables II and III. The second part of the proof addresses the typing rules at the global level, i.e., the configuration typing rules depicted in Table IV. Both proofs are straightforward, using the induction schemes corresponding to the inductive definitions of the type rules. Despite the relatively small size of the computation model ASP fun, these rules are fairly complex. Hence, to avoid mistakes in these proofs, we have formalized them in Isabelle/HOL. The Isabelle/HOL sources can be found at https://sites.google.com/site/floriankammueller/home/resources.

B. Confinement
Confinement is the property of our type system encoding the principal idea of the security model: if a method of an object can be called remotely, it must be a public L-method. In preparation for proving confinement, we next present a chain of lemmas leading up to it. Let o be an object and T an arbitrary type environment throughout the following formal statements.
The type rules for subsumption allow types of objects to be "lifted", i.e., objects can have more than one type. We thus lose uniqueness of type judgments. To overcome this, we use a well-known trick (already used in the Hindley-Milner type system for ML to accommodate polymorphic types) to regain some kind of uniqueness: minimal types. Using slight generalization and contraposition, the previous lemma can be strengthened to the following key lemma for confinement: ⟨Γ_act, Γ_fut, sec⟩, A_α(futlab(f_k)) ⊢_ML f_k : Γ_act(α).

Theorem 2 (Confinement):
If a future f_k is typeable with an arbitrary PC S at a type whose global part δ is strictly larger than the global level of f_k's home activity α, then f_k has been initially generated from a call to an L-method of α. Formally, let C : ⟨Γ_act, Γ_fut, sec⟩, α[Q, a] ∈ C, and f_k ∈ dom(Q) with ⟨Γ_act, Γ_fut, sec⟩, T; S ⊢ f_k : (A_x, δ) where glob(Γ_act(α)) ⊏ δ. Then A_α(futlab(f_k)) = L.
The proof of confinement is basically just a combination of Lemma 5.4 and Proposition 5.5. The chain of lemmas and confinement have been proved in Isabelle/HOL as well.

C. Noninterference
Confinement can be considered a simple security property because it is similar to a safety property: confinement is preserved on every trace of execution of a configuration. Intuitively it seems to imply confidentiality of private parts, but this is only true for direct information flows. Confidentiality necessitates that no information is leaked to an outsider even when considering implicit information flows as described in Section III-C. Based on those observations, we define the general property of confidentiality as noninterference, informally meaning that an attacker cannot learn anything despite his ability to observe configurations on all runs while comparing values that he can see: a difference in the value of the same call allows deductions about a change in hidden parts. The formal definition of noninterference for active objects [32] is a bisimulation over the indistinguishability relation ∼_α on configurations. We omit the rather technical definition of indistinguishability, referring to Appendix D. Essentially, indistinguishability says that C and C_1 appear equal from the attacker α's viewpoint even if they differ in secret parts; noninterference means that this appearance is preserved by the evaluation of configurations.
Definition 5.6 (α-Noninterference): If configuration C is indistinguishable to any C 1 for α with respect to sec and remains so under the evaluation of configurations → , then C is α-noninterfering. Formally, we define α-noninterference C sec as follows.
A main result for our security type system is soundness: a well-typed configuration is secure, i.e., α-noninterference holds for the configuration and it does not leak information.
Theorem 3 (Soundness): For any well-typed configuration C, we have noninterference with respect to any α ∈ C.
The proof of this theorem is a case analysis distinguishing the cases where a reduction step of the configuration has happened in the α-visible part or outside it. In the latter case, a difference in the visible part would mean a breach of confinement. Within the visible part, a straightforward case analysis shows that what is possible in one configuration must also be possible in the other, indistinguishable one, since those parts are isomorphic; hence the same reduction rules apply. We have formalized the definitions of indistinguishability, noninterference, and multi-lateral security, as well as the statements of the theorems, in Isabelle/HOL; only the soundness proof is not yet formalized, but a detailed paper proof is contained in Appendix E.
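The intent of noninterference can be illustrated with a toy check, well short of the formal bisimulation (all Python names here are ours): a computation is noninterfering for an observer if varying the secret input never changes the L-observable output.

```python
# Toy illustration of noninterference as "H-variations do not change
# the L observation" (not the formal bisimulation over ~_alpha).
def leaky(secret):
    """L-observable output depends on H input: interferes."""
    return 1 if secret >= 1000 else 0

def safe(secret):
    """L-observable output independent of H input: noninterfering."""
    return 0

def noninterfering(f, secrets):
    """All variations of the secret must yield one and the same
    observable output."""
    return len({f(s) for s in secrets}) == 1
```

The bisimulation-based definition generalizes this single-output check to whole evaluation traces of configurations: indistinguishable configurations must stay indistinguishable under every reduction step.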
The parameterization of the attacker as an active object α makes it possible to adapt the noninterference predicate. If we universally quantify α in our definition of noninterference, we obtain a predicate where each object could be the attacker, corresponding to multi-lateral security.
Definition 5.7 (Multi-Lateral Security): If a configuration C is α-noninterfering for all α ∈ dom(C), then multi-lateral security holds for C. Since no α is fixed in the type statement, the soundness theorem holds for any α if the configuration is well-typed. Hence, well-typedness immediately implies multi-lateral security.

VI. RELATED WORK AND CONCLUSIONS
The main difference of our approach is that we specifically address functional active objects. We also use a nonstandard security model [32] for multi-lateral security tailored to distributed active objects. Other work on actor security, e.g. [30], is based on message-passing models different from our high-level language model. The paper [6] addresses only direct information flows in active objects. The priority program Reliably Secure Software Systems (RS3) of the German Research Foundation (DFG) [34] addresses in its subproject MoVeSPAcI [42] security of actor systems, using an event-based approach without futures.
The Distributed Information Flow Control (DIFC) approach [39] provides support for Java programs (Jif) to annotate programs with labels "Alice" and "Bob" for information flow control. In this approach, objects are not first-class citizens. The formal model [52] uses a lambda calculus λ_DSec to accommodate the rich hierarchy of labels, but (Java) objects are not part of the calculus. They use an elegant approach to prove noninterference of a type system for labels, pioneered by Pottier and Simonet [43]. This approach does not apply to parallel languages since the evaluation order of parallel processes is not deterministic.
Reactive Noninterference, e.g. [36], [10], adopts a reactive system view. Some of these works, e.g. [41], use a while language in their formal models and bisimulation based noninterference notions but the semantics is message passing by events.
The language-based approach has its beginnings in [51], [50]. The authors of [49] offer the first model of language-based information flow control for concurrency, later refined by Boudol and Castellani addressing scheduling problems and related timing leaks. Many works have followed this methodology (see [48] for an overview). However, most works consider imperative while-languages with various extensions like multi-threading. Barthe and Serpette [9] have considered security type systems for ς-objects but no distribution. Later, Barthe and others [8], [31] provide information flow control for Java-like languages. Sabelfeld and Mantel consider message passing in distributed programs [47], [35]. These works use the secure channel abstraction, i.e., connecting remote processes of the same security class via secure channels integrating security primitives.
Distributed security has also been considered in many works in the setting of process algebras, most prominently using the pi calculus [38] (see [24] and [46] for overviews). Commonalities of process-algebra-based security with our work are the bisimulation notion of noninterference and asynchronous communication. The spi calculus [3] extends the pi calculus with constructs for encryption and decryption. It is thus a forerunner of current work that integrates encryption primitives into languages, most prominently homomorphic encryption [25]. The applied pi calculus [2], in contrast, is a generalisation of Milner's original pi calculus with equational theories, i.e., functions and equations. Thereby, extensions by cryptographic primitives are possible. The applied pi calculus is used for security protocol verification; an implementation is the model checker ProVerif [16]. There is a line of research on mobile calculi that use purely functional concurrent calculi. A few representative papers are [29] on the pi calculus and [27] for the security pi calculus. An impressive approach on information flows for distributed languages with mobility and states [37] first introduces declassification. Similar work is [11], also studying noninterference for distribution and mobility for Boxed Ambients. [12] deviates from the generality of the applied pi calculus, focusing on core abstractions for security in distributed systems, like secure channels [13]. What we have in common with these works is the modeling of distributed systems by a calculus, but none of the pi-calculus-related work focuses on active objects, while we do not consider cryptographic primitives. An interesting perspective would be to investigate the relationship between confinement and the effects of cryptographic primitives. We also use a bisimulation-based equivalence relation to express noninterference.
In the applied pi calculus, for example, the notion of a static equivalence, similar to our indistinguishability is used in addition to observational equivalence that corresponds to our notion of noninterference (see e.g. [18]).
This work presented a formal framework for the security of active objects based on the semi-lattice security model that propagates confinement. We presented a safe security type system that verifies the confinement property and is sound, i.e., checks security, with respect to a dedicated formal notion of noninterference or, more generally, multi-lateral security. ASP fun makes secure down-calls possible and is still applicable bi-directionally, as illustrated by implementing the NSPK protocol. The proofs have in large part been formalized in Isabelle/HOL. An implementation of functional active objects is given by Erlang Active Objects [22], which also provides a simple extension by a run-time monitor for confinement [23].

if([])
In the third line above, this, y ∉ FV(c) ∪ FV(d); [] denotes the empty object. The definition shows how, similar to the λ-calculus, the functionality of the constructor is encoded in the elements of the datatype: when b is true, its method if delegates to the method then, filled with term c; when b is false, if delegates to else, executing term d.
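The delegation pattern just described can be transliterated to Python closures (an illustrative analogue of the encoding, not the ASPfun ς-calculus terms themselves): a boolean is a selector between the two branches, and if simply applies it.

```python
# Church-style booleans as branch selectors, mirroring the encoding:
# true delegates to the "then" branch, false to the "else" branch.
true  = lambda then_, else_: then_
false = lambda then_, else_: else_

def if_(b, c, d):
    """The if method: b selects a branch; branches are thunks so that
    only the chosen one is evaluated."""
    return b(c, d)()
```

Making the branches thunks matters: as in the calculus, the unselected branch term is never executed.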
Typing the if-then-else construct is a base test for an information flow type system, as this construct is the basic example that gives rise to implicit information flows. We thus illustrate here how the security type rules presented in this paper establish that the guard of the if-then-else construct, the if, must be typed with the same PC as the branches, i.e., then and else. It then immediately follows that if the method if has an H-PC, the branches must have an H-PC as well. The reasoning instantiates type rules, showing the constraints that follow for the security assignment in the security type Σ_ifte.
A condition b in the method if of an if-then-else object evaluates to either true or false. We consider these two possibilities and infer their types and the resulting constraints.
To type true, we initially type this, which can be done only by rule VAL SELF, leading to the following typing, where Σ_ifte = (A_ifte, δ_ifte) is the security type for the if-then-else object and M_ifte = ⊔{A_ifte(if), A_ifte(then), A_ifte(else)}. We use the arbitrary set of additional type assumptions T provided by the rule to integrate the type assumption for y already here; it is needed further down for typing the object, but only formally. We then apply the rule TYPE CALL to infer a type for this.then. The following instance of that rule sets the parameters such that it can be applied to the previous instance of VAL SELF. Since for an arbitrary if-then-else guard b we have to allow both values true and false as possible outcomes, we have to combine the constraints and conclude the following overall constraint for A_ifte.
The update of the methods then and else does not change the PC and thus preserves the security assignment and the constraints. This constraint is what we expect for information flow security: if the guard of an if-then-else can only be typed with an H-PC, then its branches must also be "lifted" to H. Only if the guard can be typed with an L-PC can the branches also be typed with an L-PC.

Note on typing constants
In the above type rule instances we have used typings for constants, for example the empty object [], as given and did not refine them any further. Constants (like [] or commonly used plain objects) can more practically be considered as activities without any H-methods that are included as a "data base" in a configuration. Then, an occurrence of [] is literally the activity named "empty object", i.e., [] is an activity reference. For the typing, the natural type of the empty object is given by the empty security assignment ∅, i.e., the partial function that is undefined for all inputs, and the bottom element ⊥ of the visibility semi-lattice, which corresponds to the empty set of activity names. A similar type and subtyping argumentation applies to other constants, for example 0 or 1, used in the running example. Similar to Church numerals, simple term representations can be given for them in ASP fun. Such constant activities η must have all their methods assigned to L, i.e., their security assignment A maps all method names of η to L. Then the PC of the activity η is also L because it is given as ⊔{L} according to the rule TYPE CONFIGURATION. The global level of a constant activity like η is defined as the set {η}. If the constant η is used by referencing it in other activities of the configuration, the name η becomes part of the other activities' global levels.

B: Running Example -Details on Typing
The following shows why the example configuration presented as the running example cannot be typed with income → H ∈ sec.

Implementation
The quicksort function is described in Section II-C. The manager activity that controls the ordering of a list and the sorting object χ that calls the ord method in the β-objects are repeated here for the convenience of the reader. The extended method ord, which bears a dependency between ord and income,
β[∅, [ord = ς(y) if this.income/10³ ≥ 1 then 1 else 0, income = . . .]]
is not typeable for any security assignment sec that imposes the constraint income → H. The following type inference elaborates why the type system rejects any security assignment that contains the constraint income → H. It illustrates how the security assignment A_βi = ass(Γ_act(β_i)) is inferred.
Typing the remote call implies ord → L. Since the method ord is called remotely in χ via α, we need ord → L, which is impossible because of the dependency in the above implementation. To be able to type the call to β_i.ord in the object χ, this method must be an L-method according to TYPE CALL and GLOB SUBSUMPTION. More precisely, let (A_βi, δ_βi) = Γ_act(β_i). We have Γ_act, Γ_fut, sec, ∅; M_βi ⊢ β_i : (A_βi, δ_βi) because of TYPE ACTIVE OBJECT REFERENCE and β_i ∈ dom(C). The PC is M_βi = {A_βi(j) | j ∈ dom(A_βi)}, where A_βi needs to be inferred in the process. We can next use TYPE CALL to type Γ_act, Γ_fut, sec, ∅; A_βi(ord) ⊢ β_i.ord : (A_βi, δ_βi). However, to type β_i.ord in the context of the object χ, it needs to be typed as (A_βi, δ_χ) with global type component δ_χ. This upgrading of the call can only be achieved by an application of rule GLOB SUBSUMPTION, which requires δ_βi ⊑ δ_χ (which is true) but also requires that the PC of the typing Γ_act, Γ_fut, sec, ∅; A_βi(ord) ⊢ β_i.ord : (A_βi, δ_βi), i.e., A_βi(ord), is L.
Typing β_i.ord at global level δ_βi is only possible with an H-PC. The next part of the argument states that the only type that can be inferred for a call β_i.ord is T; H ⊢ β_i.ord : (A_βi, δ_βi), i.e., with an H-PC. This is because types for calls can only be inferred by rule TYPE CALL, and A_βi(ord) = H coerces the PC to H according to that rule. For clarity of the exposition, we omit Γ_act, Γ_fut, sec in front of the typings in the following. Since we want to arrive at ⊢ C : Γ_act, Γ_fut, sec, by the inversion principle of inductive type definitions, all provisos of TYPE CONFIGURATION have to be true. The only way to arrive at a type for t_ord is by an application of TYPE CALL as in the following instance.

INSTANCE TYPE CALL
T; S ⊢ (((true.if := (this.>0(this.div10³(this.income)))).then := 1).else := 0) : (A_βi, δ_βi)
In order to match the conclusion of the above with the first proviso of the earlier INSTANCE TYPE OBJECT, the security assignment of ord is coerced to that of if. We only need to show that A_βi(if) = H and we are finished.

Typing implies that A_βi(if) = H
The following chain of steps shows how a type for the body of ord, and thus A_βi(ord), must be inferred, detailing how the security assignment parameter A_βi(if) needs to be instantiated to H. The chain of reasoning starts from the single specified security assignment income → H in sec and shows that then also ord → H, which contradicts the above ord → L. Hence, no type can exist with the constraint income → H for this configuration.
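The constraint propagation behind this chain of reasoning can be illustrated with a deliberately simplified executable model. The two-point lattice, the `join` and `pc` helpers, and the dictionary representation of security assignments are our own assumptions for illustration; they are not the paper's formal type system.

```python
# Illustrative model only: a two-point security lattice {L, H} and the
# constraint propagation sketched in the text.

L, H = "L", "H"

def join(a, b):
    """Least upper bound in the two-point lattice L below H."""
    return H if H in (a, b) else L

def pc(assignment):
    """Model of the PC of an activity: join over all method levels."""
    result = L
    for lvl in assignment.values():
        result = join(result, lvl)
    return result

# Security assignment for beta_i with the specified constraint income -> H.
A_beta = {"income": H}

# The body of ord reads this.income, so the level of ord must dominate
# the level of income (the coercion performed by TYPE CALL).
A_beta["ord"] = join(L, A_beta["income"])   # forced to H

# The remote call beta_i.ord in chi requires A(ord) = L (GLOB SUBSUMPTION).
remote_call_ok = (A_beta["ord"] == L)
print(A_beta["ord"], remote_call_ok)  # H False: no typing exists
```

The contradiction is exactly the one derived in the text: income → H forces ord → H, while the remote call requires ord → L.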
A_βi(income) is H by the constraint on sec and thus on A_βi. According to VAL SELF with M = {A_βi(income), A_βi(ord), . . .}, which amounts to H since A_βi(income) = H, we get the following typing for this.
T; H ⊢ this : (A_βi, δ_βi)
According to TYPE CALL, this.income is typeable only with PC H since {income → H} ⊆ A_βi.
T; H ⊢ this.income : (A_βi, δ_βi)
The previous typing feeds into rule TYPE CALL again, this time for the parameter t. Since the PC S_j matches with H, we again get an H-PC, coercing the method div10³ also to be assigned H in A_βi.

INSTANCE TYPE UPDATE
∅; S ⊢ true : (A_βi, δ_βi)    this : Σ_βi :: [] : Σ_βi :: ∅; A_βi(if) ⊢ this.>0(this.div10³(this.income)) : (A_βi, δ_βi)
The first proviso, the typing for true, can be inferred as shown in the previous section, using rule SEC ASS SUBSUMPTION in addition to embed it into β_i. We spell out some portion of M_βi = {A_βi(if), . . .} above to emphasize that A_βi(if) is part of the PC; the dots stand for A_βi(ord), A_βi(income), etc. To match the previous derivation ∅; H ⊢ this.>0(this.div10³(this.income)) : (A_βi, δ_βi) to the second proviso in the above instance, it is necessary to coerce A_βi(if) to H. We are finished here already because we have already shown above that A_βi(if) = A_βi(ord), which thus is H, contradicting the earlier requirement that it be L.
For completeness, we continue the derivation of the body of t_ord. From the previous step, we get the conclusion ∅; H ⊢ true.if := this.>0(this.div10³(this.income)) : (A_βi, δ_βi).

Running Example: Typing Summary
The coercions revealed in the above steps determine the parameter A_βi in summary as follows.
That is, the only possible instantiation for A_βi(ord) is H. We cannot meet the constraint A_βi(ord) = L required to call it from the outside in χ, as explained initially. Therefore, the example configuration cannot be typed with the constraint income → H.

Borderline Example for Confinement
The confinement property states that remote calls can only be addressed to L-methods. But does this simple security property guarantee that no hidden H-methods can be returned with the reply to such a call? Consider the following example: α[∅, [leak = ς(y)this, key = ς(y)n]], where n is an integer representing a secret key. Let the security assignment for α be {leak → L, key → H}. One might think that an activity β could contain a call α.leak.key: since the method leak is L, the remote call to α.leak is enabled. Once the call result is returned into β, it would evaluate to the active object of α inside β (since this represents the active object of α). Since we are now already in β, it might seem possible to apply the method key to extract the key.
How does the security type system prevent this? Since the typing for this inside α is only possible with the PC M_α = {A_α(i) | i ∈ {leak, key}}, which amounts to H (since A_α(key) = H), the typing for this yields
INSTANCE VAL SELF
this : Σ_α :: T; H ⊢ this : (A_α, δ_α).
Typing the object α must use the following instance of TYPE OBJECT. Now, matching the instance of VAL SELF for this with the first proviso of the instance of TYPE OBJECT coerces A_α(leak) to H, contradicting the initial specification. That is, the method leak is forced to be H and cannot be called remotely.
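Schematically, the coercion can be sketched as follows; this is our informal rendering of the rule interplay, not the paper's exact rule format:

```latex
% Informal sketch (ours): this can only be typed under PC H, so matching
% that typing against the proviso of TYPE OBJECT forces the PC of leak
% to H, contradicting the specified leak -> L.
\mathsf{this} : \Sigma_\alpha \;\vdash\; \mathsf{this} : (A_\alpha,\delta_\alpha)
\ \text{ only with PC } H
\quad\Longrightarrow\quad
A_\alpha(\mathit{leak}) = H \;\neq\; L
```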

C: Formal Semantics of ASP fun
For a concise representation of the operational semantics, we define contexts as expressions with a single hole (•). A context E[t] denotes the term obtained by replacing the single hole by t.
This notion of context is used in the formal semantics of ASP fun in Table V and also in the definition of visibility (see Definition 3.1).
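As a small illustration (our own example, not taken from the paper), a one-hole context and two fillings can be written as follows:

```latex
% Example (ours): a context E with a single hole, and E[t] obtained by
% plugging a term into the hole.
E \;=\; \bullet.l(s)
\qquad
E[t] \;=\; t.l(s)
\qquad
E[\gamma.m(u)] \;=\; (\gamma.m(u)).l(s)
```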

D: Indistinguishability
In ASP fun, active objects are created by activation, futures by method calls. Names of active objects and futures may differ in evaluations of the same configuration, but this does not convey any information to the attacker. In order to express the resulting structural equivalence, we use typed bijections as in [7] that enable the definition of an isomorphism of configurations necessary to define indistinguishability. This technique of using the existence of "partial bijections" to define an isomorphism between configurations only serves to express equality of visible parts, but is rather technical as it needs to provide differently typed bijections for the involved structures, e.g., futures, objects, and request lists.

Definition 6.1 (Typed Bijection): A typed bijection is a finite partial function σ on activities α (or futures f_k, respectively) such that for a type T ∀ a : dom(σ).
By t =_{σ,τ}^sec t′ we denote the equality of the terms t and t′ up to replacing all occurrences of activity names α or futures f_k by their counterparts τ(α) or σ(f_k), respectively, restricted to the label names in sec, i.e., in the object terms t and t′ we exempt those parts of the objects that are private. The local reduction with →*_ς of a term t to a value t_e (again up to future and activity references) is written t ⇓ t_e.

Definition 6.2 (Equality up to Name Isomorphism): An equality up to name isomorphism is a family of equivalence relations on ASP fun terms indexed by two typed bijections (σ, τ) := R and a security assignment sec, consisting of the following differently typed sub-relations; the sub-relations' types are indicated by the naming convention: t for ς-terms, α, β for active objects, f_k, f_j for futures, Q_α, Q_β for request queues.
Such an equivalence relation defined by two typed bijections σ and τ may exist between given sets V_0, V_1 of active object names in C, C_1. If V_0, V_1 correspond to the viewpoints of the attacker α in C and its counterpart in C_1, we call this equivalence relation indistinguishability.
In the following, we use the visibility range based on Definition 3.1 as VI_sec(α, C) ≡ {β ∈ dom(C) | β ⊑_sec^C α}.

Definition 6.3 (Indistinguishability): Let C, C_1 be arbitrary configurations, well-typed with respect to a security specification sec, with active object α ∈ dom(C) and α ∈ dom(C_1) (we exempt α from renaming for simplicity). Configurations C and C_1 are called indistinguishable with respect to α and sec, written C ∼_α C_1, if α's visibility ranges are the same in both up to name isomorphism:

C ∼_α C_1 ≡ ∃ σ, τ.
  VI_sec(α, C) = dom(σ)
  VI_sec(α, C_1) = ran(σ)
  ∀ β ∈ VI_sec(α, C). C(β) =_{σ,τ} C_1(σ(β))

As an example of α-indistinguishable configurations, consider the running example. In the original (non-fallacious) form, β_1.income could be 42 in configuration C and 1042 in configuration C′. Since income is specified as H, those two configurations can be considered α-indistinguishable (with respect to β_1): attacker α sees no difference between the two. In the fallacious example, however, he would notice a difference when calling the quicksort algorithm that implicitly drafts information from income through ord: here, C and C′ would be distinguishable, since β_1.ord is 0 in C and 1 in C′.
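The running-example comparison just given can be replayed in a deliberately simplified executable model. The dictionary representation of configurations, the reduction of visibility to "L-labelled fields", and the function names are all our own assumptions for illustration, not the paper's formal definition:

```python
# Toy sketch (ours): configurations as dicts mapping activity names to
# field/value dicts, a security assignment marking fields H or L, and
# indistinguishability as equality of the L-visible parts up to a
# renaming bijection sigma on activity names.

def visible_part(config, sec):
    """Keep only the L-labelled fields of every activity."""
    return {
        name: {m: v for m, v in obj.items() if sec.get(m) == "L"}
        for name, obj in config.items()
    }

def indistinguishable(c0, c1, sec, sigma):
    """c0 ~ c1 iff their visible parts agree up to the renaming sigma."""
    v0, v1 = visible_part(c0, sec), visible_part(c1, sec)
    if set(sigma.keys()) != set(v0) or set(sigma.values()) != set(v1):
        return False
    return all(v0[a] == v1[sigma[a]] for a in v0)

sec = {"income": "H", "ord": "L"}
# Two runs of the running example differing only in the H field income.
c0 = {"beta1": {"income": 42, "ord": 0}}
c1 = {"beta1p": {"income": 1042, "ord": 0}}
print(indistinguishable(c0, c1, sec, {"beta1": "beta1p"}))  # True

# In the fallacious example ord leaks income, so the runs become
# distinguishable: ord differs in the visible part.
c1_leak = {"beta1p": {"income": 1042, "ord": 1}}
print(indistinguishable(c0, c1_leak, sec, {"beta1": "beta1p"}))  # False
```

The two calls mirror the text: the H-only difference (income 42 vs. 1042) is invisible to the attacker, while the implicit flow through ord makes the configurations distinguishable.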
Proof: Let C_1 be another arbitrary but fixed configuration such that C ∼_α C_1. This means that for any β ∈ dom(C) with β in the visibility range of α, we have σ(β) ∈ dom(C_1) and C(β) =_{σ,τ} C_1(σ(β)) for some σ and τ. That is, apart from differently named futures (and active object references), these two activities are structurally the same and contain the same values. For the sake of clarity of the proof exposition, we leave the naming isomorphism implicit, i.e., we use the same names, e.g., β, f_k, for both sides, i.e., for β, σ(β) and f_k, τ(f_k). Note that the types of the configurations C′ and C′_1 are in some cases extensions of the types of C and C_1, as described in the Preservation Theorem 1.
The proof is an induction over the reduction relation combined with a case analysis of whether an arbitrary β ∈ dom(C) is in the visibility range of α or not.
If, for the first case, C → C′ by some reduction according to the semantics rules in the part of the configuration that is not visible to α, then in most cases trivially no change becomes visible by the transition to C′: for any local reduction this is the case, since the visibility relation is unchanged. Hence, "invisible" objects remain invisible. If C ∼_α C_1 and C → C′, then also C′ ∼_α C_1, whereby we trivially have the conclusion since C_1 →* C_1 (in zero steps). This observation is less trivial for the rules REQUEST and ACTIVE, where new elements, futures and activities respectively, are created. In the case of REQUEST, let β[f_k → E[γ.l(t)] :: Q_β, t_β] ∈ C and γ[Q_γ, t_γ] ∈ C with β in the α-invisible part and γ visible to α. The fact that there are no side effects provides that the request f_m created in the request step in γ, i.e., γ[f_m → t_γ.l(t) :: Q_γ, t_γ] in C′, is not α-visible.
Similarly, if a new activity γ is created from a method in β according to ACTIVE, then γ will not be in the visibility range of α, since β was not visible to α by Definition 3.1 of the visibility relation. Thus, for β not visible to α: if C ∼_α C_1 and C → C′, then also C′ ∼_α C_1, and the conclusion holds again because C_1 →* C_1. This closes the case of non-α-visible reductions.
If C → C′ by some reduction in the α-visible part, we need to consider all cases individually, as given by the induction over the semantics rules.
If C → C′ by semantics rule REQUEST, then C must have contained β[f_k → E[γ.l(t)] :: Q_β, t_β] and γ[Q_γ, t_γ] for some β, f_k, and γ. Hence, β[f_k → E[f_m] :: Q_β, t_β] and γ[f_m → t_γ.l(t) :: Q_γ, t_γ] are in C′ for some new future f_m. Since β is α-visible, so is γ by definition of visibility (since f_k was created from t_β, t_β must have an L-method containing γ). By confinement, f_m has global level δ_β and l is L. Since C ∼_α C_1, and β, f_k, γ are visible to α, we have (up to isomorphism of names) that β[f_k → E[γ.l(t)] :: Q_β, t_β] and γ[Q_γ, t_γ] are in C_1. Therefore, we can equally apply the rule REQUEST to C_1 to obtain a C′_1 that contains β[f_k → E[f_m] :: Q_β, t_β] and γ[f_m → t_γ.l(t) :: Q_γ, t_γ]. In C′_1, γ is also α-visible and l is typed L as well. Now, the α-visible parts of C′_1 are equal to those of C′ apart from the new future f_m. However, based on the future bijection τ that exists due to the indistinguishability of C and C_1, we can extend τ to a bijection τ′ covering f_m. In addition, by preservation, C′ as well as C′_1 are well-typed, whereby finally C′ ∼_α C′_1, and this finishes the REQUEST case.

Another, also less obvious, case with new elements in the α-visible part is the one for ACTIVE. However, here we have a very similar situation as in the REQUEST case. If, in C, there is some β[f_k → E[Active(t)] :: Q_β, t_β], we also have β alike in C_1, whereby we get in the next step, according to rule ACTIVE, β[f_k → E[γ] :: Q_β, t_β] γ[∅, t] in C′, replacing the previous β. We can also apply ACTIVE in C_1 so that β[f_k → E[γ] :: Q_β, t_β] γ[∅, t] is in C′_1 as well, instead of just the old β. Indistinguishability is preserved since a bijection σ′ exists as an extension of σ to the new activity γ, and by preservation again C′ and C′_1 remain well-typed. We are finished with the case for ACTIVE since C′ ∼_α C′_1.
The other cases, corresponding to the remaining semantics rules, are of a very similar nature. Thus, the second part, concerning the α-visible parts of the configurations C and C_1, is also finished, and this completes the proof of the theorem.