
Re: Interim decision: Accept final 5 SA category candidates (9/28)



Spaf said:

>I don't think that (finger and rusers) even rate as an exposure by
>that description.  The problem requires the following to be true:
>
>1) The service needs to be accessible to the malfeasor (external or
>internal)

Clearly, whether this is true depends on the specific environment - on
the network and system configuration.  There is also the question of
who or what should be treated as a malfeasor.  The most sensitive,
restrictive environments may assume a malfeasor with unlimited
resources and capabilities, while a completely open environment may
not (or may be willing to accept the cost/benefit ratio).

>2) The service needs to respond to requests from the malfeasor with
>correct, useful information.

This is the expected behavior of finger, assuming the malfeasor has
the access specified in (1).

>3) The system on which the service is running must have some other
>vulnerability that can be exploited.

In the case of finger or rusers, there are several significant
*potential* problems that could be exploited and that could lead to
the compromise of a system.  For example, a user's password may be
easily guessed from finger information if the password is based on
information that can be obtained through finger.  (Tools at least as
early as SATAN did a fairly good job of this.)  A related *potential*
problem is a default password, or a null password.  In these
situations, finger information could be regarded as an exposure since
it releases information that could be used as a stepping stone to a
compromise.  In a specific environment at a specific time, such a
problem may truly exist, whether or not it is known or observable by a
human.

>4) The system needs to be accessible so that vulnerability can be
>exploited.

I agree, but as in (1), I think this is environment-specific.

>I run a version of finger on my machine.  It returns information that
>may or may not be accurate.  It may not respond to requests from some
>hosts and domains.  My machine is otherwise pretty tightly configured,
>so people knowing that there is a user 'spaf' on my machine isn't a
>problem (as if they couldn't guess that otherwise).  I am basically
>the only user on my machine.  So, is "finger" still an exposure
>because it is running?  In this case, I wouldn't think of it as such.

Finger on a well-configured machine that keeps up to date with all
patches shouldn't be a problem.  But if finger says a user "spaf" is
on the system, and spaf's password is "spaf," then I'd say it's a
problem.  There are situations - and other policies - that would treat
this as a significant concern, so CVE needs to recognize that.  The
CVE entry for finger wouldn't apply under your policy, but it could
under someone else's.

>And I won't even mention the policy problem again. :-)

Interpreting a vulnerability or exposure in light of "Policy" is
definitely a problem until we can collectively find a good way to
effectively specify unambiguous policies.  But I think it's more than
policy.  The interpretation of a particular security problem needs to
be done in light of the specific state of the environment in which the
bug/configuration is being observed, regardless of what the policy is
- at least from an enterprise security perspective.  An administrator
might not think that the nastiest root-access buffer overflow is a
problem if the box only operates in single-user mode in an area that
requires physical access by a small number of highly trusted
individuals who authenticate through biometrics.  Obviously this is an
extreme example from an operational perspective, but the punchline is
that as long as some vulnerability/exposure is considered such within
the context of *some* reasonable security policy, it should be
included in CVE, so that CVE can be useful to a broad variety of
policies and environments.  Some security policies require disabling
particular services because they are regarded as providing too much
information, so finger should be included in CVE.  However, some CVE
users may never have a need for that particular entry.

- Steve
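The stepping-stone argument in point (3) rests on how freely a finger
daemon hands out account information.  As a minimal sketch, assuming
only the protocol described in RFC 1288 (TCP port 79; the query is a
user name followed by CRLF), a client like the following shows what
any network peer that can reach the service gets back; the host and
user names are hypothetical placeholders.

    import socket

    def finger(host, user="", port=79, timeout=5.0):
        # Per RFC 1288 the request is simply "<username>\r\n"; an empty
        # name asks for a listing of logged-in users where the server
        # permits it.
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall((user + "\r\n").encode("ascii"))
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("latin-1", "replace")

    if __name__ == "__main__":
        # Hypothetical host/user: the reply typically includes the login
        # name, real name, and login-activity details.
        print(finger("host.example.com", "spaf"))

Whether that output matters is exactly the environment question above:
on a tightly configured single-user box it is noise, while on a system
with guessable, default, or null passwords it can be the first step of
a compromise.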
