
The CVE-10K problem



All - It's that time. So far in 2006, we have assigned nearly 7,000 CVE identifiers. We don't have 100% comprehensiveness, but I would say that for the usual sources (major vuln DBs, vendor reports, Bugtraq, etc.) there are probably another 100 to 1,000 CVEs for 2006. Given the continued growth trend in vulnerabilities, it is a realistic possibility that in 2007 we run the risk of assigning CVE-2007-9999 - the question of how to handle 10,000 entries, the CVE-10K problem. Here are some possible solutions. Feedback is appreciated. We can also discuss this topic at the upcoming telecon.

1) Keep the year and move to hex-based numbers... CVE-2007-9999 would go to CVE-2007-A000, and so on. Problem: it could break many applications that assume digits only. Benefit: we could handle 65,000 IDs in a single year.

2) Completely randomize the year portion. We've considered this for a number of reasons, because too many people make assumptions based on the year portion of the ID already - sometimes it's date of disclosure, sometimes it's date of assignment, sometimes it's because of a typo from an authoritative source. Randomization would help in some other ways, too. This is the most radical approach but has some strengths. Problem: any crude usability is lost. Benefit: the possible space of 100 million identifiers allows us to pass the problem onto the next generation :) but also might allow for less tightly controlled allocation of CVEs (although reduced control has serious negative consequences on CVE-based quantitative analyses and maintenance costs, so this is only a possibility).

3) Adding 1000 to the year. Benefit: introduces predictability, and it's one of the least radical approaches. It buys us some time. Problem: only increases to 20,000 identifiers in a year. Bigger problem: the identifier is likely to be thought of as a typo by many readers, and automatically "corrected" to the current year, which would be an identifier for the wrong issue.

4) Keeping the year, and extending the numeric portion to 5 digits. Benefit: this preserves the CRUDE utility of the year portion and doesn't introduce any alphabetic characters. Problem: some tools/products/databases might assume only 8 total digits instead of 9, so one digit could get lopped off. Maintenance costs would be greater than #2 and #3. It also might affect sorting, but in the grand scheme of things, I'm less concerned than I used to be.

Handling over, say, 20K issues in a year would likely require a paradigm shift within the entire vulnerability information management industry. As Dave Mann has pointed out to me numerous times, the growth in the number of vulns is outpacing the growth in CVE funding, which has been mostly flat with respect to content generation itself, with increasing risks of our funding actually being reduced. (I don't think most people understand why good vulnerability information isn't cheap.) Anyway, I suspect that this growth problem is hurting other vuln databases/products, too. We're already seeing some of that paradigm shift: the Board gave up voting a while ago due to the amount of effort, you're seeing more generic vulnerability database entries with more mistakes (probably made by less experienced analysts with less editorial oversight), the percentage of verified issues is probably smaller, etc.

Thoughts?

- Steve

P.S. Thanks to Pascal Meunier for asking about this privately, which prompted me to mention it here.
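
A minimal sketch of the identifier-space arithmetic behind the four options, and of the kind of naive ID matching that options 1 and 4 could break. The regex and the capacity figures below are illustrative assumptions drawn from the post above, not anything CVE has specified:

    import re

    # Hypothetical pattern of the sort many tools are assumed to use:
    # exactly four digits after the year, digits only.
    NAIVE_CVE_PATTERN = re.compile(r"^CVE-\d{4}-\d{4}$")

    # Rough capacity of the numeric portion under each option.
    capacities = {
        "1) hex suffix (0000-FFFF)": 16 ** 4,            # ~65,000 IDs per year
        "2) randomized year + 4 digits": 10 ** 4 * 10 ** 4,  # 100 million IDs total
        "3) year and year+1000": 2 * 10 ** 4,            # 20,000 IDs per calendar year
        "4) 5-digit suffix": 10 ** 5,                    # 100,000 IDs per year
    }

    for option, capacity in capacities.items():
        print(f"{option}: {capacity:,} identifiers")

    # IDs that the naive digits-only, 4-digit pattern would reject:
    for sample in ("CVE-2007-9999", "CVE-2007-A000", "CVE-2007-10000"):
        print(sample, "matches naive pattern:", bool(NAIVE_CVE_PATTERN.match(sample)))

Running this shows CVE-2007-9999 matching while the hex-style and 5-digit forms fail, which is the breakage the post anticipates in consumers that hard-code the current format.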

Page Last Updated or Reviewed: May 22, 2007