Our model lets us categorize attacks according to which model components are attacked, creating a checklist that developers and testers can use to validate the security of their programs.
Attack color code: black – channel; red – isolation; green – security administration (policy)
The host and the other things relied upon (e.g. hardware, crypto) work correctly.
– The red arrows show possible attack points on the isolation mechanism that can lead to isolation failures.
– The black arrows show possible attack points that can lead to program failures.
The program knows about all allowed input channels.
It is up to the program to handle all inputs correctly (a minimal sketch of this follows).
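As a minimal sketch of the "known input channels" assumption, the dispatcher below accepts messages only from an explicit allow-list of channels and validates each message before handling it. The channel names, size limit, and function names here are hypothetical illustrations, not part of the model itself.

# Sketch: a program that accepts input only on channels it knows about.
ALLOWED_CHANNELS = {"ipc", "network", "ui"}   # the program's declared channels

def validate(message: bytes) -> bool:
    # Stand-in for real, channel-specific input validation.
    return 0 < len(message) <= 64 * 1024

def dispatch(channel: str, message: bytes) -> None:
    if channel not in ALLOWED_CHANNELS:
        # An undeclared channel means the isolation mechanism failed;
        # this should have been impossible, not merely rejected here.
        raise RuntimeError(f"input on undeclared channel: {channel}")
    if not validate(message):
        # Handling bad input on a known channel is the program's job.
        raise ValueError("malformed input rejected")
    print(f"{channel}: {len(message)} bytes accepted")

dispatch("network", b"hello")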
Attacks
Both a crypto protocol stack and the guard filter traffic, ruling out some attacks.
The remaining attack points are shown here (a checklist sketch follows the list):
Packet handling code exposed to all sources
Crypto stack exposed to most sources
Packet handling code exposed to crypto-authorized sources
Guard code exposed to crypto-authorized sources
Internal app code exposed to sources passed by the guard
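As one way to turn this enumeration into the checklist mentioned above, the sketch below records which sources each component is exposed to and emits one review item per exposure. The component and source labels are taken from the list; the mapping and function are hypothetical, not an existing API.

# Sketch: attack-surface checklist generated from a component -> exposure map.
EXPOSURES = {
    "packet handling code": ["all sources", "crypto-authorized sources"],
    "crypto stack":         ["most sources"],
    "guard code":           ["crypto-authorized sources"],
    "internal app code":    ["sources passed by the guard"],
}

def checklist(exposures: dict) -> list:
    # One review item per (component, source) pair for devs and testers.
    return [
        f"[ ] verify {component} safely handles input from {source}"
        for component, sources in exposures.items()
        for source in sources
    ]

for item in checklist(EXPOSURES):
    print(item)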
Accountability is the ability to hold an entity, such as a person or organization, responsible for its actions.
Accountability is not the opposite of anonymity, nor the same as a total loss of privacy. The degree of accountability is negotiated between the parties involved, as in Infocard, for example; if there is no agreement, then nothing is disclosed and they stop interacting. In other words, the sender chooses how accountable he wants to appear, and the recipient chooses the level of accountability he will accept. If the sender is not accountable enough for the recipient, the interaction ends with nothing disclosed on either side.
Accountability requires a consistent identifier based upon a name, a pseudonym or a set of attributes. When the identifier is based upon a name, the recipient may use a reputation service to determine whether the sender is accountable enough. Should the sender behave unacceptably, then the recipient can “punish” the sender by reducing the sender’s reputation.
When the identifier is a pseudonym, it must be issued by an indirection service that knows the true identity of the sender. When the sender behaves unacceptably, appropriate authorities may ask the indirection service to reveal the sender's real-world identity to them.
Using a set of attributes as the identifier requires a certificate, or some other claims mechanism, from a trusted authority. When the sender behaves unacceptably, or the claimed attributes are proved false, the trusted authority may be contacted and asked to “punish” the sender by removing him from its list. Alternatively, the recipient may choose to drop the trusted authority itself as not being accountable enough.
Becoming accountable does not necessarily mean disclosing anything about your real-world identity, which protects privacy.
Using accountability as a mechanism for deciding whether to receive network packets is much more difficult. There is no single end node: packets pass through nodes that have no direct relation to the sender, and the per-packet cost of verifying accountability must be very small if it is not to hurt network performance. Both factors make checking accountability for network access very hard.
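A minimal sketch of the negotiation described above, assuming a numeric accountability scale and a stand-in reputation lookup. The names negotiate, reputation_of, and the 0.0–1.0 scale are hypothetical illustrations, not Infocard's actual interfaces.

# Sketch of sender/recipient accountability negotiation.
def reputation_of(name: str) -> float:
    # Stand-in for a reputation service query (0.0 = unknown, 1.0 = excellent).
    return {"alice": 0.9}.get(name, 0.0)

def negotiate(identifier: str, kind: str, offered_level: float,
              required_level: float) -> bool:
    # Returns True if the interaction proceeds; False means it ends
    # with nothing disclosed on either side.
    if kind == "name":
        # Name-based identifiers can be checked against a reputation service.
        offered_level = min(offered_level, reputation_of(identifier))
    return offered_level >= required_level

assert negotiate("alice", "name", 1.0, 0.8)        # reputable name: proceed
assert not negotiate("mallory", "name", 1.0, 0.8)  # unknown name: stop, nothing disclosed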
Red-Green is our name for giving each user two different environments in which to do their computing.
One environment is carefully managed to keep out attacks: code and data are allowed in only if they are of known trusted origin, because we know that the implementation will have flaws and that ordinary users trust things they shouldn't. This is the “Green” environment; important data is kept in it.
But a lot of work, and a lot of entertainment, requires accessing things on the Internet about which little is known, or can even feasibly be known, regarding their trustworthiness, so that activity cannot be carefully managed. We provide a second environment in which it can be done: the “Red” environment.
The Green environment backs up both environments. When some bug or user error corrupts the Red environment, it is restored to a previous state (see the recovery slide); this may entail loss of data, which is why important data is kept on the Green side, where it is less likely to be lost. Isolation between the two environments is enforced using IPsec.
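A minimal sketch of the Red-side recovery idea, assuming the Red environment runs in a VM with periodic snapshots kept on the Green side. The snapshot list, labels, and function names are hypothetical; real VM products expose snapshotting differently.

# Sketch: roll the Red VM back to its newest snapshot after corruption.
snapshots = []                       # oldest first; stored on the Green side

def take_snapshot(label: str, keep: int = 5) -> None:
    snapshots.append(label)
    del snapshots[:-keep]            # retain only the most recent snapshots

def recover_red() -> str:
    # Data created since the snapshot is lost, which is why important
    # data lives on the Green side.
    if not snapshots:
        raise RuntimeError("no snapshot: rebuild Red from a clean image")
    return snapshots[-1]

take_snapshot("nightly-mon")
take_snapshot("nightly-tue")
print("restoring Red to", recover_red())   # -> nightly-tue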
At this point the big unknown is the user experience. We know of different models:
1. The KVM switch model (as in NetTop).
2. The X-Windows model (with windows actually fronting for execution on some other machine).
Which model users will find preferable is still an open question.
More to the point, the security of the system depends on a separation that will be visible to the user: there will be things the user is not allowed to do because of the security policy. The best way to communicate that separation to the user also needs to be researched. The KVM switch model, in which the user envisions two separate PCs that have to use network shares to exchange files, might be the simplest to grasp.
Implementation is still a matter of debate within the company – and even within the team.
Today we are going to talk about the VM-based isolation solution.
Almost everyone has a “red” machine:
Security settings are optimized for immediate productivity, not long-term security.
So… the untrustworthy or unaccountable get to interact with important assets.
Content downloaded through IE, Messenger, p2p, etc. should be tagged with download-source information, similar to IE zones. As the content moves through the “airlock”, the tags should move with it. Auditing: run the virus checker synchronously at the crossing. Mention the Attachment Execution Services (AES) check. (A sketch of the tagging idea follows.)
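A minimal sketch of source tagging across the airlock, loosely modeled on IE zones. The Tagged class, zone labels, and scan function are hypothetical illustrations, not the actual AES or IE-zone interfaces.

# Sketch: downloaded content carries a source tag; the airlock preserves
# the tag and scans the content before letting it cross to the Green side.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tagged:
    data: bytes
    source: str          # e.g. the download URL
    zone: str            # IE-zone-like label, e.g. "internet", "intranet"

def virus_scan(data: bytes) -> bool:
    # Stand-in for a synchronous virus-checker call.
    return b"EICAR" not in data

def airlock_transfer(item: Tagged) -> Tagged:
    # Move content Red -> Green; the tag moves with it.
    if not virus_scan(item.data):
        raise PermissionError(f"blocked at airlock: {item.source}")
    return item          # tag (source, zone) is preserved across the boundary

doc = Tagged(b"quarterly report", source="http://example.com/q3.doc",
             zone="internet")
green_copy = airlock_transfer(doc)
print(green_copy.zone)   # still "internet": AES-style checks can use it later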
Must keep:
Important data
Attackers
On different sides of a VM isolation boundary
Partition network as shown
Apply stricter security settings in Green
Software restriction policies
Restrict user admin privileges …
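As a minimal sketch of the stricter Green-side settings, the check below allows a program to run only if its publisher is on an allow-list, in the spirit of software restriction policies. The publisher names and the may_run function are hypothetical; a real policy would verify code signatures.

# Sketch: Green-side software restriction -- allow-list by publisher.
TRUSTED_PUBLISHERS = {"Contoso Corp", "Fabrikam Inc"}   # hypothetical allow-list

def may_run(publisher) -> bool:
    # Code runs in Green only if it is of known trusted origin.
    return publisher is not None and publisher in TRUSTED_PUBLISHERS

print(may_run("Contoso Corp"))   # True: trusted origin
print(may_run(None))             # False: unsigned / unknown origin stays out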
We think this works pretty well for Red-Green in enterprises,
but we don’t know how to do it for home users…