First, to answer your question: The security system is designed to protect GOOD USERS from BAD CODE; it is explicitly not designed to protect GOOD CODE from BAD USERS. Your access restrictions mitigate attacks on your users by partially trusted hostile code. They do not mitigate attacks on your code from hostile users.
If the threat is hostile users getting your code, then you have a big problem. The security system does not mitigate that threat at all. Second, to address some of the previous answers: understanding the full relationship between reflection and security requires careful attention to detail and a good understanding of the details of the CAS system.
The previously posted answers which state that there is no connection between security and access because of reflection are misleading and wrong. Yes, reflection allows you to override "visibility" restrictions (sometimes). That does not imply that there is no connection between access and security.
The connection is that the right to use reflection to override access restrictions is deeply connected to the CAS system in multiple ways. First off, in order to do so arbitrarily, code must be granted private reflection permission by the CAS system. This is typically only granted to fully trusted code, which, after all, could already do anything.
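To make that first point concrete, here is a minimal sketch of what "private reflection permission" looks like in the .NET Framework CAS model; the `Vault` type is hypothetical and used only for illustration. Fully trusted code passes the demand trivially; a sandboxed grant set that lacks the MemberAccess flag does not.

```csharp
using System;
using System.Reflection;
using System.Security.Permissions;

public class Vault
{
    // Hypothetical type used only for illustration.
    private string combination = "12-34-56";
}

static class Demo
{
    static void Main()
    {
        // "Private reflection" in CAS terms: ReflectionPermission with the
        // MemberAccess flag. Fully trusted code has it implicitly; a typical
        // sandbox grant set does not, so this Demand() throws a
        // SecurityException there.
        new ReflectionPermission(ReflectionPermissionFlag.MemberAccess).Demand();

        // Note: Vault lives in this same assembly, so this particular read is
        // legal regardless; MemberAccess is what you need to do the same to
        // another assembly's private members from partially trusted code.
        FieldInfo f = typeof(Vault).GetField(
            "combination", BindingFlags.Instance | BindingFlags.NonPublic);
        Console.WriteLine(f.GetValue(new Vault()));   // prints "12-34-56"
    }
}
```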
Second, in the new .NET security model, suppose assembly A is granted a superset of the grant set of assembly B by the CAS system. In this scenario, code in assembly A is allowed to use reflection to observe B's internals.
Third, things get really quite complicated when you throw in dynamically generated code into the mix. An explanation of how "Skip Visibility" vs "Restricted Skip Visibility" works, and how they change the interactions between reflection, access control, and the security system in scenarios where code is being spit at runtime would take me more time and space than I have available. See Shawn Farkas's blog if you need details.
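That said, here is a rough sketch of where those settings surface in the API, with a hypothetical `Widget` type: the `skipVisibility` flag on `DynamicMethod` asks the JIT to skip visibility checks for the emitted IL, and in sandboxed scenarios the runtime may only grant the restricted form, which limits the emitted code to members its own grant set could legally reach (see Shawn's blog for the real rules).

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

public class Widget
{
    // Hypothetical type used only for illustration.
    private int secret = 42;
}

static class Demo
{
    static void Main()
    {
        FieldInfo field = typeof(Widget).GetField(
            "secret", BindingFlags.Instance | BindingFlags.NonPublic);

        // skipVisibility: true requests that visibility checks be skipped when
        // this dynamic method is compiled ("skip visibility"); partial trust
        // can downgrade that to "restricted skip visibility".
        var dm = new DynamicMethod(
            "ReadSecret", typeof(int), new[] { typeof(Widget) },
            typeof(Widget).Module, skipVisibility: true);

        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Ldfld, field);   // reads the private field
        il.Emit(OpCodes.Ret);

        var read = (Func<Widget, int>)dm.CreateDelegate(typeof(Func<Widget, int>));
        Console.WriteLine(read(new Widget()));   // prints 42
    }
}
```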
If I understand correctly, then code with restrictive private/sealed/readonly really is more secure if one is running the potentially hostile code in, say, medium trust. – MatthewMartin May 21 '09 at 17:21
Correct. Of course, if the hostile code is fully trusted then it is already game over, man. Hostile full trust code can just do whatever bad thing it wants. Full trust means full trust! – Eric Lippert May 21 '09 at 17:58
Access modifiers aren't about security, but good design. Proper access levels for classes and methods drives/enforces good design principles. Reflection should, ideally, only be used when the convenience of using it provides more utility than the cost of violating (if there is one) best design practices.
Sealing classes only serves the purpose of preventing developers from extending your class and "breaking" its functionality. There are different opinions on the utility of sealing classes, but since I do TDD and it's hard to mock a sealed class, I avoid it as much as possible. If you want security, you need to follow coding practices that prevent the bad guys from getting in and/or protect confidential information from inspection even if a break-in occurs.
Intrusion prevention, intrusion detection, encryption, auditing, etc., are some of the tools that you need to employ to secure your application. Setting up restrictive access modifiers and sealing classes has little to do with application security, IMO.
No. These have nothing to do with security. Reflection breaks them all.
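As a small illustration of "breaks them all" (the `Hidden` type here is hypothetical): in fully trusted code, a private constructor and a private method are no obstacle, and the same trick reaches internal types in other assemblies via `Assembly.GetType`.

```csharp
using System;
using System.Reflection;

internal sealed class Hidden
{
    private Hidden() { }

    private string Secret()
    {
        return "not so secret";
    }
}

static class Demo
{
    static void Main()
    {
        // Fully trusted code can ignore the access modifiers entirely.
        object instance = Activator.CreateInstance(typeof(Hidden), nonPublic: true);

        MethodInfo secret = typeof(Hidden).GetMethod(
            "Secret", BindingFlags.Instance | BindingFlags.NonPublic);
        Console.WriteLine(secret.Invoke(instance, null));   // "not so secret"
    }
}
```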
This is misleading. See my answer for details. – Eric Lippert May 21 '09 at 16:10
Eric, the OP did stipulate, "as long as you have access to Reflection". Are there situations where code might have access to Reflection, yet be prevented from accessing private, internal, protected or protected internal members and types? – John Saunders May 21 '09 at 19:47
Yes, but you will almost certainly not run into them unless you are a compiler writer writing a compiler that works with Silverlight. There are obscure scenarios involving Silverlight, dynamically generated code from expression trees, and compiler-generated closure classes. We ended up making minor changes to the Silverlight security system to work around the issues, but there are still potential problems there that will hopefully be addressed in future versions. – Eric Lippert May 21 '09 at 20:13
Regarding the comments about reflection and security - consider that there are many internal types and members in mscorlib.dll that call into native Windows functions and can potentially lead to badness if a malicious application uses reflection to call into them. This isn't necessarily a problem since untrusted applications normally aren't granted these permissions by the runtime.
This (and a few declarative security checks) is how the mscorlib.dll binary can expose its types to all manner of trusted and untrusted code, yet the untrusted code can't get around the public API. This is really just scratching the surface of the reflection + security issue, but hopefully it's enough information to lead you down the right path.
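As a hedged sketch of the shape being described (not actual mscorlib code, and `NativeBeep` is a made-up name): the raw native entry point stays internal, and the public wrapper carries a declarative CAS demand, so partially trusted callers are rejected even though the wrapper itself is public.

```csharp
using System.Runtime.InteropServices;
using System.Security.Permissions;

public static class NativeBeep
{
    // The native call itself is kept internal.
    [DllImport("kernel32.dll")]
    internal static extern bool Beep(uint frequency, uint durationMs);

    // The public surface makes a declarative demand, so callers without
    // UnmanagedCode permission are stopped before the native call runs.
    [SecurityPermission(SecurityAction.Demand, UnmanagedCode = true)]
    public static void PlayTone(uint frequency, uint durationMs)
    {
        Beep(frequency, durationMs);
    }
}
```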
I always try to lock things down to the minimal access required. Like tvanfosson stated, it's really about design more than security. For example, I'll make an interface public, and my implementations internal, and then a public factory class/methods to get the implementations.
This pretty much forces the consumers to always type it as the interface, and not the implementation. That being said, a developer could use reflection to actually instantiate a new instance of an implementation type. There's nothing stopping him/her.
However, I can rest knowing that I made it at least somewhat difficult to violate the design.
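A minimal sketch of the pattern described above, with hypothetical `IWidget`/`DefaultWidget`/`WidgetFactory` names: the interface and factory are public, the implementation is internal, so consumers program against the interface unless they deliberately go around it with reflection.

```csharp
using System;

public interface IWidget
{
    void Run();
}

// Internal implementation: not visible outside the assembly by normal means.
internal sealed class DefaultWidget : IWidget
{
    public void Run()
    {
        Console.WriteLine("running");
    }
}

// Public factory hands out implementations typed as the interface.
public static class WidgetFactory
{
    public static IWidget Create()
    {
        return new DefaultWidget();
    }
}

static class Consumer
{
    static void Main()
    {
        IWidget widget = WidgetFactory.Create();   // consumers only see IWidget
        widget.Run();

        // A determined caller can still reflect their way to the internal type,
        // e.g. typeof(WidgetFactory).Assembly.GetType("DefaultWidget"),
        // but the design nudges everyone toward the interface.
    }
}
```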