Created attachment 105059 [details]
Patch with fix.

The current implementation of CommandCache tracks the list of contexts which are currently "not available". isTargetAvailable() then takes the given context and checks whether it is a child of any of the contexts that are not available. However, in non-stop multi-threaded debugging a container can be running, and thus not available, while some of its child threads are suspended and thus available.

I'm attaching a patch that fixes this behavior by reversing the logic in the command cache implementation to track available contexts instead. The only side effect is that the MI services which expect the debugger to be suspended when they are created now need to explicitly set the process context as available before using the command cache.
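For illustration, here is a minimal, self-contained Java sketch of the reversed logic the patch introduces. The Ctx type and the isAncestorOf() helper are simplified stand-ins for DSF's IDMContext and DMContexts.isAncestorOf(); the real CommandCache API differs in its details.

  import java.util.HashSet;
  import java.util.Set;

  class AvailabilityTracker {

      // A context knows its parent, forming the debug-model hierarchy
      // (process -> thread -> frame).
      static class Ctx {
          final Ctx parent;
          Ctx(Ctx parent) { this.parent = parent; }
      }

      // True if 'ancestor' is 'ctx' itself or appears in its parent chain.
      static boolean isAncestorOf(Ctx ctx, Ctx ancestor) {
          for (Ctx c = ctx; c != null; c = c.parent) {
              if (c == ancestor) return true;
          }
          return false;
      }

      // After the patch: track the contexts explicitly marked AVAILABLE,
      // instead of tracking the unavailable ones.
      private final Set<Ctx> fAvailableContexts = new HashSet<>();

      void setContextAvailable(Ctx context, boolean available) {
          if (available) {
              fAvailableContexts.add(context);
          } else {
              fAvailableContexts.remove(context);
          }
      }

      // A context is available if it, or one of its ancestors, was
      // explicitly marked available.
      boolean isTargetAvailable(Ctx context) {
          for (Ctx available : fAvailableContexts) {
              if (isAncestorOf(context, available)) return true;
          }
          return false;
      }
  }

With this reversal, a suspended thread stays available even while its parent container is running, which is exactly the non-stop scenario described above.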
I committed the fix. Marc, please review.
Created attachment 105069 [details]
Small fixes to previous patch

(In reply to comment #1)
> I committed the fix. Marc, please review.

I believe a couple of small changes were missing. Pawel, I will let you commit them to make sure you agree.

Also, if I understand correctly, the rule is the following: a context is available if:
- the context is marked available in the command cache, or
- the context is a child of an available context.

This somewhat implies that if a child becomes unavailable, then all its parents become unavailable, right? (Not all its children.) If that is true, then I think that in setContextAvailable() we should replace

  if (DMContexts.isAncestorOf(itr.next(), context)) {

with

  if (DMContexts.isAncestorOf(context, itr.next())) {

What do you think? This change is not part of the patch I attached, because I wasn't as sure about it.
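To make the argument-order question concrete, here is a hedged sketch of the corrected setContextAvailable(), reusing the simplified Ctx type, isAncestorOf() helper, and fAvailableContexts set from the sketch under the bug description (this is not the actual DSF code):

  void setContextAvailable(Ctx context, boolean available) {
      if (available) {
          fAvailableContexts.add(context);
      } else {
          fAvailableContexts.remove(context);
          // When a child becomes unavailable, its ancestors can no longer
          // be considered available either, so remove every recorded
          // context that is an ancestor of 'context'. Note the argument
          // order: we ask whether the recorded context is an ancestor of
          // the one being disabled, not the other way around.
          fAvailableContexts.removeIf(recorded -> isAncestorOf(context, recorded));
      }
  }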
(In reply to comment #2)
Thank you Marc, I applied your patch.

> Also, if I understand correctly, the rule is the following...

Good point, I missed this logic change! I made this change as well.
I'm re-opening this bug because, although a container can be running in non-stop mode, we must still be able to send it the MIThreadListIds command to find out what its threads are. Therefore, we have a problem if we consider the container unavailable while it is running. There may be other commands that can or must be sent to a running container as well, but I haven't thought that through yet.

Furthermore, in non-stop mode the GDB prompt is always available, so I wonder whether we need the isTargetAvailable() method at all, or whether we can simply rely on MI commands being rejected when they are sent to a running thread.

In Francois' non-stop patch of bug 237556, he had to force the container to available manually to get things to work.
I think the initial description of this bug is problematic. In non-stop mode, the container (process) IS ALWAYS available even if the contained threads are not (i.e. running). It must be possible to send global run control commands to the contained threads (global suspend/resume and listing threads, to name a few).

Right now we have a problem with the CommandCache in non-stop mode: although the non-stop run control works great (because it overrides the propagation of unavailability to the parent containers), it does not work so well with the other services (memory, expressions, ...), which have no notion of all-stop/non-stop and don't perform that kind of control.

The heart of the problem seems to be that CommandCache propagates the unavailability state assuming that we are always in all-stop mode. I believe it needs to be a bit more flexible, based on the run mode. For example, we could pass a boolean parameter to setTargetAvailable() so it knows whether the state has to be propagated or not. The impacts of this one are fairly important: all users of CommandCache would have to pass this parameter, thus breaking (again) the binary compatibility. More problematic: the services would also have to know whether they are in all-stop or non-stop mode. This could possibly be solved by adding a "non-stop" property to AbstractDsfService, which already has an fProperties field. (While we're at it, why not duplicate all the launch configuration fields - fLaunchProperties - into AbstractDsfService so every service has read access to them?)

A less impacting change would be to have CommandCache figure out whether we are in all-stop or non-stop mode. This would mean that CommandCache either has access to the launch configuration (how?) or that we introduce some kind of service (LaunchService) that keeps track of the launch configuration properties. This one might be more flexible in the long term.

Ciao,
/fc
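As a rough illustration of the boolean-parameter idea (purely hypothetical, and again built on the simplified types from the earlier sketches rather than the real CommandCache API):

  // Hypothetical variant: the caller tells the cache whether unavailability
  // should propagate to the parent containers (all-stop) or not (non-stop).
  void setContextAvailable(Ctx context, boolean available, boolean propagateToParents) {
      if (available) {
          fAvailableContexts.add(context);
      } else {
          fAvailableContexts.remove(context);
          if (propagateToParents) {
              // All-stop: a running child makes its ancestors unavailable too.
              fAvailableContexts.removeIf(recorded -> isAncestorOf(context, recorded));
          }
          // Non-stop: skip the propagation, so the container stays
          // available while individual threads run.
      }
  }

The alternative, letting CommandCache itself query the launch mode, would keep this signature unchanged, but requires the cache to reach the launch configuration somehow, as noted above.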
(In reply to comment #4)
> I'm re-opening this bug because, although a container can be running in
> non-stop mode, we must still be able to send it the MIThreadListIds command
> to find out what its threads are....

It is up to the service itself as to which events should cause the cache to become "unavailable", so the run control service could simply not call setContextAvailable() when the target resumes, and everyone would be happy :-)
(In reply to comment #5)
> ... The heart of the problem seems to be that CommandCache propagates the
> unavailability state assuming that we are always in all-stop mode...

I haven't looked at the non-stop changes yet, so I probably don't understand the problem fully, but at the time I committed this fix I actually believed that it should work equally well in both modes. In other words, even in non-stop mode, if a container is suspended, then logically all its threads should be suspended too. And in the same way, when a whole process is resumed, all of its threads should then be running. But perhaps these are incorrect assumptions about the nature of the debugger, and some kind of mode flag should be used to configure how the command cache determines whether a target is available.
(In reply to comment #6)
> It is up to the service itself as to which events should cause the cache to
> become "unavailable", so the run control service could simply not call
> setContextAvailable() when the target resumes, and everyone would be happy :-)

In non-stop RunControl, the thread context is not available but its container (the process) should still be. The way setContextAvailable() is implemented right now, setting the thread as unavailable also propagates to its container and prevents MI thread-info commands from being issued. One impact is that the Debug view is not updated as threads are created/terminated, and a new thread cannot be selected and stopped unless the whole container is stopped or a breakpoint is hit. I'm really not sure that this is what the community wants. The easy way out (in RunControl) is to set the container as available right after having disabled the thread, which is how it is done in the patch; see the sketch below.

(In reply to comment #7)
> In other words, even in
> non-stop mode, if a container is suspended, then logically all its threads
> should be suspended too. And in the same way, when a whole process is
> resumed, all of its threads should then be running.

This is somewhat correct when you issue commands at the container level. What is debatable is whether a container ever becomes unavailable in non-stop mode. Also, my understanding of the patch is that the process (container) becomes unavailable even if only one of its n threads resumes execution.

> But perhaps these are
> incorrect assumptions about the nature of the debugger,

Hum, you're probably right. I don't know about other non-stop debuggers, but non-stop GDB allows you to resume a single thread and keep the other ones stopped. So, minimally, the container should remain available as long as at least one thread is stopped. I don't think the patch covers that case. Anyway, I think that even this is too restrictive: unless I have a fundamental misunderstanding about the nature of the container, it should always be available for commands in non-stop mode.

> some kind of mode
> flag should be used to configure how the command cache determines whether a
> target is available.

I tend to agree with you, but I wonder if that flag shouldn't be passed to each service instead (aaaargghhhh...). E.g., in the case of the Expression service, there is no container, just a context that becomes unavailable as soon as anything resumes. The net effect is that you can't evaluate an expression (say, in the Memory view) even though the variable is available from GDB. A case could be made that in non-stop mode, the Expression service should become unavailable only if the whole *process* becomes so. To do that, I believe we would need non-stop specific logic for this service (and probably others). I wonder if it would be possible to design the MI services so that they can simply be extended for the non-stop specifics. That would avoid a lot of code duplication, and the service factory could be put to better use.

This issue might be slightly more complex than it first appeared. Lots of fun in perspective :-)
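For reference, the "easy way out" mentioned above would look roughly like this (hypothetical method and context names, built on the earlier sketches):

  // When a single thread resumes in non-stop mode, mark the thread
  // unavailable, then immediately re-mark its container as available, so
  // that container-level MI commands (e.g. listing threads) keep working
  // despite the unavailability having propagated upwards.
  void handleThreadResumed(AvailabilityTracker cache, Ctx threadCtx, Ctx containerCtx) {
      cache.setContextAvailable(threadCtx, false);
      cache.setContextAvailable(containerCtx, true);
  }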
Maybe this whole discussion should be moved to bug 237556?
The cache issues have been fixed by putting better handling in the services themselves. I am marking this bug as fixed again.
I forgot to mark this as verified.
DD 1.1 released!