Bug 337006 - [Security] Disclosure
Summary: [Security] Disclosure
Status: RESOLVED FIXED
Alias: None
Product: Community
Classification: Eclipse Foundation
Component: Architecture Council
Version: unspecified
Hardware: PC Linux
Importance: P3 normal
Target Milestone: ---
Assignee: eclipse.org-architecture-council CLA
QA Contact:
URL: http://eclipse.org/security/
Whiteboard:
Keywords:
Depends on:
Blocks: 337004
 
Reported: 2011-02-11 15:20 EST by Wayne Beaton CLA
Modified: 2013-12-18 16:09 EST
CC List: 10 users

See Also:


Attachments

Description Wayne Beaton CLA 2011-02-11 15:20:49 EST
How/when do we disclose security issues?

One important consideration is that would-be evil-doers can use the disclosed vulnerability in an attack against unpatched installations. But people can't protect themselves against attack if they don't know about the problem and fix. Of course, this sort of discussion ends up being very circular. Ultimately, I believe that security issues need to be reported to the general population.

As discussed in Bug 337005, we have the ability to mark Bugzillas as "committers-only". When do we turn off this flag?

The Bugzilla Project itself uses a progressive disclosure strategy. The team contacts the people they know and trust who are using Bugzilla, informs them of the issue (by adding them to the cc list of the bug), and invites them to apply the patch. Those folks who are in the "circle of trust" can themselves invite in others that they know. It happens very organically.

Should we explore something similar?

When a fix is available, the project can add known and trusted adopters and users to the conversation. Once everybody has had sufficient opportunity to apply the patches and prepare, the rest of the world can be informed.

We might also consider a disclosure to the whole membership ahead of open disclosure.

What do we do if a patch is just taking too long, or simply cannot be produced? I think that we should leave it to the project's discretion (they know their community best) to decide if early disclosure makes sense (i.e. allow consumers to prepare).

Opening up the bug (turning off the "committers-only" flag) is a sort of quiet disclosure; nobody new will find out about the bug unless they come across it as part of a search. How should we inform the broader community? Tweet? RSS? Wiki Page? Blog? 

It's probably a good idea to have a single place on eclipse.org where all vulnerabilities can be discovered. Or is that a good idea?
Comment 1 John Arthorne CLA 2011-02-25 13:36:41 EST
An interesting reference:

http://www.mozilla.org/projects/security/security-bugs-policy.html

In short, all the technical discussion happens in a bugzilla report with the security group set. Through consensus discussion in the bug report they decide if and when a disclosure should be made to a central place, and how much detail to disclose:

http://www.mozilla.org/security/known-vulnerabilities/

Within the bug they also agree on a time to make the bug report itself public. If they fail to reach agreement on disclosure, the decision is escalated (to Mozilla staff, or I suspect EF staff in our case).

I like that general policy - it lets the "community" make most of the decisions but has a process for escalating. Since there is such a vast range of size and impact in security bugs, it would be hard to have a "one size fits all" policy.
Comment 2 Glyn Normington CLA 2011-02-28 11:47:52 EST
SpringSource has a fairly sane approach which Eclipse could learn from. There is a single web page [1] for security vulnerability information. CVE [2] is used to uniquely identify vulnerabilities, which is useful if, for example, a single vulnerability impacts a number of projects.

I think the basic idea should be to keep the "committers-only" flag switched on until either a fix or workaround is published and available or it turns out nothing can be done (although I don't know of any such examples). I suspect using CVE would tend to push Eclipse towards best practice.

[1] http://www.springsource.com/security
[2] http://cve.mitre.org/
Comment 3 Wayne Beaton CLA 2011-03-17 09:45:01 EDT
Another disclosure vector for consideration:

National Vulnerability Database
http://nvd.nist.gov/home.cfm
Comment 4 Wayne Beaton CLA 2011-04-08 13:46:18 EDT
I have a really hard time with the idea that bugs could remain closed for all time. I feel that there should be some kind of limit on how long a resolved bug can stay hidden.

Does it make sense to automatically remove the committer-only flag after three months? Six months? A year?

FWIW, we currently have 17 resolved bugs that are still hidden from public view (sorry public viewers).

https://bugs.eclipse.org/bugs/buglist.cgi?query_format=advanced;field0-0-0=bug_group;bug_status=RESOLVED;type0-0-0=equals;value0-0-0=Security_Advisories
Comment 5 Denis Roy CLA 2011-04-08 14:17:30 EDT
> bugs could remained closed for all time.

FIXED bugs related to a security vulnerability, for which there is a patch, a workaround, or a fix, should be reopened to public view as quickly as possible. Users and administrators must be made aware that a vulnerability exists so they can assess risk and take the appropriate action to protect their users, servers, and systems from potential exploit.

To do otherwise is, IMHO, irresponsible and unlike the open source way.
Comment 6 Wayne Beaton CLA 2011-04-08 14:33:39 EDT
Denis, do you feel that we should make a distinction between resolved and fixed?

What about acknowledging vulnerabilities that we won't/can't fix?
Comment 7 Denis Roy CLA 2011-04-08 15:29:23 EDT
> Denis, do you feel that we should make a distinction between resolved and
> fixed?

Only if there really is a distinction.  If there is a resolution (i.e., disabling a service or implementing a workaround) but no fix, I think we owe it to our community to advise them that there is a known vulnerability -- without necessarily disclosing the actual exploit, if there is one.  Submitting the vulnerability to CVE would likely be the best course of action here.

Again IMHO, the most important aspect is to disclose the vulnerability so that our users can protect themselves against an exploit or an attack.


> What about acknowledging vulnerabilities that we won't/can't fix?

If we can't, or won't fix it, perhaps someone else will.  But for that we need to acknowledge it.
Comment 8 Wayne Beaton CLA 2011-04-08 15:45:01 EDT
It just occurred to me that once the committer-only flag is removed, we don't have an easy way of identifying disclosed vulnerability bugs. Theoretically, we should be able to use the history information to identify bugs that have been unmarked (though I'm not sure how).

Should we consider creating a "vulnerability" flag or equivalent?

My main concern is that for a keyword to be successful, committers need to consistently remember to apply the keyword...
Comment 9 John Arthorne CLA 2011-04-08 16:04:53 EDT
(In reply to comment #8)
> It just occurred to me that once the committer-only flag is removed, we don't
> have an easy way of identifying disclosed vulnerability bugs. Theoretically, we
> should be able to use the history information to identify bugs that have been
> unmarked (though I'm not sure how)
> 
> Should we consider creating a "vulnerability" flag or equivalent?
> 
> My main concern is that for a keyword to be successful, committers need to
> consistently remember to apply the keyword...

We already have a "security" keyword. As you suggest the trick is making sure we apply it consistently.
Comment 10 Wayne Beaton CLA 2011-04-08 16:19:05 EDT
(In reply to comment #9)
> We already have a "security" keyword. As you suggest the trick is making sure
> we apply it consistently.

Ah. There it is. :-)
Comment 11 Wayne Beaton CLA 2011-04-21 14:25:07 EDT
I like the idea of automatically generating a Security Vulnerabilities Disclosures page and RSS feed.
 
We can pretty easily generate a page that displays all bugs that have the 'security' keyword but are not currently marked committers_only. I think that we could also generate a page that displays all bugs that have had the flag enabled at some point in the past. I'm not sure I like generating based on history; at least with the 'security' keyword, we have a measure of control (a 'historical' query would include, for example, Bug 300500 and bugs that have been erroneously marked).  FWIW, we currently have three bugs in this state.
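
The filtering rule described here can be sketched as follows. This is a minimal illustration: the bug records are plain dicts whose field names ("keywords", "groups", "status") are stand-ins for Bugzilla concepts, not its actual API.

```python
# Hypothetical sketch of the disclosure-page filter described above.
# The sample data and field names are assumptions for illustration.

def disclosed_security_bugs(bugs):
    """Resolved bugs with the 'security' keyword that are no longer
    restricted to the Security_Advisories (committers-only) group."""
    return [
        b for b in bugs
        if "security" in b["keywords"]
        and b["status"] == "RESOLVED"
        and "Security_Advisories" not in b["groups"]
    ]

sample = [
    {"id": 1, "keywords": ["security"], "status": "RESOLVED", "groups": []},
    {"id": 2, "keywords": ["security"], "status": "RESOLVED",
     "groups": ["Security_Advisories"]},  # still undisclosed: filtered out
    {"id": 3, "keywords": [], "status": "RESOLVED", "groups": []},
]
print([b["id"] for b in disclosed_security_bugs(sample)])  # [1]
```

The keyword acts as the opt-in signal, and the group membership acts as the gate; a bug appears on the page only once both conditions line up.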

We should also be able to highlight bugs that have been resolved as WONTFIX.

So I'm thinking that we include a passage in the policy that states that resolved bugs related to security vulnerabilities be marked with the 'security' keyword. We should probably also include a recommendation to change the short description to something that will be meaningful to the reader in a listing.

Good idea? Bad idea?
Comment 12 John Arthorne CLA 2011-04-25 17:00:09 EDT
(In reply to comment #11)
> So I'm thinking that we include a passage in the policy that states that
> resolved bugs related to security vulnerabilities be marked with the 'security'
> keyword. We should probably also include a recommendation to change the short
> description to something that will be meaningful to the reader in a listing.
> 
> Good idea? Bad idea?

+1. Driving the list automatically from bugzilla is our best chance of actually keeping the list updated. Requiring the keyword be added is about the lowest impact requirement I can imagine.
Comment 13 Wayne Beaton CLA 2011-05-25 15:43:52 EDT
I have set up a page [1] that displays all resolved bugs that are marked with the 'security' keyword but do not have the 'committer-only' flag (which is really a Group, but you know what I mean).

So far, we have one item on the list. There are several resolved bugs that I am attempting to goad the corresponding owners into disclosing.

As currently written, the security policy states that "vulnerabilities--regardless of state--must be disclosed to the community after a maximum three months." (I'll fix up those em dashes).

I need to decide whether I'll just make my query smarter and automatically disclose, or whether the EMO will take responsibility for going through the bugs and explicitly opening the aged ones. Or we can change the policy before it goes into effect.

Thoughts?

[1] http://www.eclipse.org/security/known.php
Comment 14 John Arthorne CLA 2011-05-25 16:14:28 EDT
(In reply to comment #13)
> As currently written, the security policy states that
> "vulnerabilities--regardless of state--must be disclosed to the community after
> a maximum three months." (I'll fix up those em dashes).
> 
> I need to decide if I'll just make my query smarter and automatically disclose,
> or EMO will take responsibility to go through the bugs and explicitly open the
> aged ones. Or we can change the policy before it goes into effect.

Three months seems a bit arbitrary, but I see the policy leaves wiggle room for individual PMCs to decide what is appropriate. Generally I think disclosure should happen when a *release* is available containing the fix - requiring the consumer community to take unreleased (un-IP-reviewed, possibly untested) software can be harsh in some cases.


> [1] http://www.eclipse.org/security/known.php

Very useful. It would be nice if it could also scrape the "target milestone" so the audience can quickly see what version of the software contains the fix. See for example:

http://www.mozilla.org/security/known-vulnerabilities/
Comment 15 Wayne Beaton CLA 2011-05-25 16:28:45 EDT
Done.

Once we have a few more bugs showing up, I'll see what we need to do about sorting and grouping.
Comment 16 Wayne Beaton CLA 2011-05-25 16:39:30 EDT
(In reply to comment #14)

By "done" in my previous comment, I mean the part where we display the target milestone.

> Three months seems a bit arbitrary, but I see the policy leaves wiggle room for
> individual PMCs to decide what is appropriate. Generally I think disclosure
> should happen when a *release* is available containing the fix - requiring the
> consumer community to take unreleased (un-IP-reviewed, possibly untested)
> software can be harsh in some cases.

I can live with this. Mostly. I can make it part of the release process to identify the "committer-only" bugs resolved in the release.

I still think we need to deal with longevity. There are a lot of folks who seem to think that security vulnerabilities should remain closed forever. What do we do about bugs that aren't fixed in a release, or are never fixed?

I believe that expert wisdom suggests that even security vulnerabilities without fixes should be disclosed so that consumers can prepare themselves to mitigate risk.

http://www.schneier.com/essay-146.html
Comment 17 Jesse McConnell CLA 2011-05-25 17:14:54 EDT
Something that has come up a number of times on the rt-pmc calls is what to do if there is some sort of critical vulnerability (or bug fix) in one of these durable, snapshot-in-time p2 release repositories.

Take Indigo, for example. The general understanding is that once it has been pulled together, it's never going to change again. But what if there is some critical issue in there?  The mission of that repo, as communicated to me, is to provide a stable point for users to start assembling their applications from, but it seems ludicrous to me to ignore critical bug fixes and vulnerabilities in these sorts of repos.

Would that mean that these disclosures must also identify which durable, unchanging repositories are subject to the issue?
Comment 18 John Arthorne CLA 2011-05-26 16:14:42 EDT
Was this bug marked INVALID intentionally? Should we move discussion of disclosure into the master bug 337004?
Comment 19 Wayne Beaton CLA 2011-05-26 16:33:41 EDT
(In reply to comment #18)
> Was this bug marked INVALID intentionally? Should we move discussion of
> disclosure into the master bug 337004?

Er. No. This must have been the result of a keyboard stumble.
Comment 20 John Arthorne CLA 2011-05-27 08:54:27 EDT
(In reply to comment #17)
> Take indigo for example, the general understanding is that once that has been
> pulled its never going to change again, but what if there is some critical
> issue in there.  The mission of that repo as communicated to me is to provide a
> stable point for users to start assembling their applications from, but it
> seems ludicrous to me to ignore critical bug fixes and vulnerabilities in these
> sorts of repos.

I think the "durable repository" idea is just that you don't change the contents of jars already in that repository. So if someone wants to install feature X version a.b.c, they get the exact same result every time they do it. I think adding new versions to the repository, or patches, would be fine. This is similar to the fact we don't replace or change the contents of zip files containing old releases every time we have a new bug fix.
Comment 21 Thomas Watson CLA 2011-05-27 09:11:18 EDT
I agree with John; new versions can be uploaded to a repository and should be detected as updates.  The question is whether we have a policy for releasing critical security fixes to the release train repository outside of the two scheduled service releases, or whether these critical security fixes should be published in a separate repository that is configured into our EPPs but remains empty until we have our first critical security fix for the release train.
Comment 22 John Arthorne CLA 2011-05-27 09:20:43 EDT
In an offline discussion with some of our IBM consumer community, the suggestion came up that announcing the disclosure date in advance would be very helpful. When consumers know they have a hard deadline to apply a security fix before it is disclosed, they are more motivated to get those fixes rolled out.

One possible way to automate this: once a security bug is marked fixed, a disclosure countdown starts. The "known issues" page would list those bug ids, including the title, target milestone, and disclosure date. Only people in the progressive disclosure group would be able to see the details at that point (those CC'd on the bug and all committers, and maybe all member companies have a way they can be added to the list). Once the date is reached, the PMC receives a regular nag email that they need to disclose it (remove the advisory group checkmark). Reopening the bug would reset the countdown (I'm hoping there is an easy way to query bugzilla about how long a bug has been in the fixed state).

The only thing I'm not sure about is what the "best before" date would be, or even whether all projects could agree on such a date. You would think 3 months from fix date would be enough, but some Eclipse consumers have large product stacks and customer bases that take time to roll fixes out to. Even something like 6 months would be a big improvement on our current state (where bugs are sometimes undisclosed a year after a fix is available).
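
The countdown mechanism proposed above can be sketched roughly as follows. The 90-day window and the function names are assumptions for illustration, not an agreed policy.

```python
from datetime import date, timedelta

# Assumed window; the comment above leaves the actual "best before"
# period (3 months? 6 months?) open.
DISCLOSURE_WINDOW = timedelta(days=90)

def disclosure_date(fixed_on):
    """Deadline for removing the advisory group: a fixed window after
    the bug was marked FIXED. Reopening the bug clears fixed_on and
    restarts the countdown when the bug is fixed again."""
    return fixed_on + DISCLOSURE_WINDOW

def needs_nag(fixed_on, today):
    """True once the deadline has passed and the PMC should receive
    the regular nag email to disclose."""
    return today >= disclosure_date(fixed_on)

print(disclosure_date(date(2011, 5, 25)))              # 2011-08-23
print(needs_nag(date(2011, 5, 25), date(2011, 9, 1)))  # True
```

The "known issues" page would list the bug id, title, target milestone, and the computed disclosure date, so consumers in the progressive disclosure group can see exactly how long they have to roll fixes out.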
Comment 23 Markus Knauer CLA 2011-05-27 09:26:00 EDT
There is always this little backdoor that we never used: the repository that defines the EPP package content is not part of the aggregated Simultaneous Release repository, but it is linked from that location as another composite repository (from time to time there have been requests to aggregate the EPP repository into the Simultaneous Release repository). This means that you could use the EPP repository to push out changes to the packages independently of the Simultaneous Release repository. There is only one thing to keep in mind: p2 updates only if there are changes to the root feature, and all packages have exactly *one* root feature. The consequence is that if, e.g., the Eclipse Platform team updates anything, the packages won't notice unless the EPP package gets a feature update.
Comment 24 Wayne Beaton CLA 2011-05-27 10:56:49 EDT
The progressive disclosure group (all committers, along with those people who are referenced explicitly on the bug) can see all the bugs currently in the undisclosed state with this URL:

https://bugs.eclipse.org/bugs/buglist.cgi?query_format=advanced;field0-0-0=bug_group;bug_status=RESOLVED;type0-0-0=equals;value0-0-0=Security_Advisories

All undisclosed bugs that were resolved three months ago or more:

https://bugs.eclipse.org/bugs/buglist.cgi?chfieldto=-3m;query_format=advanced;chfield=bug_status;chfieldfrom=-100y;chfieldvalue=RESOLVED;field0-0-0=bug_group;type0-0-0=equals;value0-0-0=Security_Advisories

Is it enough to put links such as these on the Disclosures page?

Frankly, I'd prefer to make the countdown start from the time the bug is created.

https://bugs.eclipse.org/bugs/buglist.cgi?field0-3-0=resolution;type0-1-0=notequals;field0-1-0=resolution;field0-0-0=bug_group;value0-3-0=NOT_ECLIPSE;chfieldto=-3m;chfield=%5BBug%20creation%5D;query_format=advanced;value0-2-0=DUPLICATE;value0-1-0=INVALID;chfieldfrom=-100y;type0-3-0=notequals;field0-2-0=resolution;type0-0-0=equals;value0-0-0=Security_Advisories;type0-2-0=notequals

(note that I modified this last query to exclude DUPLICATE, INVALID, and NOT_ECLIPSE bugs)
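
For illustration, query URLs like the ones above could be assembled programmatically rather than hand-edited. This is a hypothetical helper: the parameter names are copied from the URLs in this comment, and `urlencode` joins parameters with '&' rather than the ';' separator used above (Bugzilla accepts both).

```python
from urllib.parse import urlencode

BUGLIST = "https://bugs.eclipse.org/bugs/buglist.cgi"

def undisclosed_query(resolved_before=None):
    """Build a buglist URL for bugs still in the Security_Advisories
    group. Optionally restrict to bugs resolved before a relative date
    such as '-3m' (three months ago)."""
    params = [
        ("query_format", "advanced"),
        ("field0-0-0", "bug_group"),
        ("type0-0-0", "equals"),
        ("value0-0-0", "Security_Advisories"),
    ]
    if resolved_before:
        params += [
            ("chfield", "bug_status"),
            ("chfieldvalue", "RESOLVED"),
            ("chfieldfrom", "-100y"),
            ("chfieldto", resolved_before),
        ]
    return BUGLIST + "?" + urlencode(params)

url = undisclosed_query("-3m")  # "undisclosed, resolved 3+ months ago"
```

Generating both links (all undisclosed, and undisclosed older than the deadline) from one helper keeps the Disclosures page and the policy's cutoff in sync.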

The more that I think about this, the more that I think a hard deadline for disclosure is *absolutely* required. And that disclosure, regardless of state, should be automatic after that deadline has passed. By "automatic", I mean that I will likely do a weekly/monthly sweep.

Otherwise, a bug could potentially be kept private indefinitely by a consumer saying "not yet".
Comment 25 John Arthorne CLA 2011-06-10 14:34:20 EDT
(In reply to comment #24)
> Is it enough to put links such as these on the Disclosures page?

I just wanted to report that I have found these queries quite handy. Linking them on the known issues page makes sense to me ("Undisclosed bugs" / "Undisclosed bugs older than 3 months").

FWIW, if you check these links today you will see Eclipse currently has *zero* undisclosed security bugs.
Comment 26 Wayne Beaton CLA 2013-12-18 16:09:32 EST
I think that we're done here.