| Summary: | Lockup in CSourceNotFoundDescriptionFactory | | |
|---|---|---|---|
| Product: | [Tools] CDT | Reporter: | James Blackburn <jamesblackburn+eclipse> |
| Component: | cdt-debug-dsf-gdb | Assignee: | Marc Khouzam <marc.khouzam> |
| Status: | RESOLVED FIXED | QA Contact: | Marc Khouzam <marc.khouzam> |
| Severity: | critical | | |
| Priority: | P3 | CC: | john.cortell, pawel.1.piech |
| Version: | 7.0 | Flags: | john.cortell: review+ |
| Target Milestone: | 7.0.1 | | |
| Hardware: | PC | | |
| OS: | Linux-GTK | | |
| Whiteboard: | | | |
| Attachments: | backtrace, assertions to stderr, Stack trace and gdb traces, Fix | | |
Hi James,

Is the problem reproducible? If so how? If you can reproduce it, could you enable assertions (-ea) and see if there's anything in the error log?

Created attachment 170029 [details]
assertions to stderr

(In reply to comment #1)
> Is the problem reproducible?
> If so how?
> If you can reproduce it, could you enable assertions (-ea) and see if there's
> anything in the error log?

I'll have a quick go. I was attempting to use the DSF GDB process attach launch to debug a gdb server implementation being used as a backend for an already running debug session...

I haven't yet been able to readily reproduce, but I switched back to CDI to debug the issue as DSF doesn't appear to be setting all my breakpoints -- I have more than 10 breakpoints in the BP view, DSF only seems to -break-insert one of them, whereas CDI inserts all of them... (They're all ordinary line number breakpoints.)

I do run with -ea, and have been getting a bunch of assertions to stderr (attached).

Thanks, the exceptions in stderr indicate a problem when launching; I don't think they are directly responsible for the hanging call to CSourceNotFoundDescriptionFactory$1.getDescription. If you manage to reproduce the problem, please capture the errors. Also, if you had -ea on while the bug occurred, maybe the exception is still in your .log file?

(In reply to comment #3)
> Also if you had -ea on while the bug
> occurred, maybe the exception is still in your .log file?

Unfortunately there's nothing interesting in the error log at the time of the crash.

(In reply to comment #2)
> I do run with -ea, and have been getting a bunch of assertions to stderr
> (attached).

Do you have the corresponding 'gdb traces'? Maybe some of my assumptions about interrupting the target were wrong. Were both your host and target Linux? I gather your target was not running the FSF gdbserver, but your own implementation? As Pawel said, this is probably not related to the lockup though.

(In reply to comment #5)
> Do you have the corresponding 'gdb traces'? Maybe some of my assumptions about
> interrupting the target were wrong.

It was a complete UI lockup. Is there any way to get the GDB traces without using the console? It was a runtime Eclipse being run under the PDE, but the session is now long gone :(. I looked at the backtrace and couldn't make much sense of it to gather more detail. Is there anything else to grab should this happen again?

> Were both your host and target Linux?
> I gather your target was not running the FSF gdbserver, but your own
> implementation?
> As Pawel said, this is probably not related to the lockup though.

I agree this is likely unrelated. Everything was running locally; I was using CDI with my GDB server (remote simulator) and was using DSF attach to debug the remote server. Seemed like a good idea at the time :)

If there's not enough information to reproduce / track down, then do close; I can always reopen if I see it again. UI lockups are scary; it would be nice if they weren't possible even when things go badly wrong in external processes.

(In reply to comment #6)
> (In reply to comment #5)
> > Do you have the corresponding 'gdb traces'? Maybe some of my assumptions about
> > interrupting the target were wrong.
>
> It was a complete UI lockup. Is there any way to get the GDB traces without
> using the console?

The assert errors should probably happen even without the UI lockup. But just in case, you can start your Eclipse with "-debug $HOME/dsf.debug.options" and have the file dsf.debug.options contain the lines:

org.eclipse.cdt.dsf/debugCache = true
org.eclipse.cdt.dsf.gdb/debug = true

Chasing those errors is not a high priority right now since they are not the cause of the UI lockup, so let's wait for that part until after Helios.
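For reference, the file described above uses the standard Eclipse tracing-options (Java properties) format; a minimal version matching those two lines could look like the following, where the file name and location are just the example given in the comment:

```
# Example contents of $HOME/dsf.debug.options (the path is only an example);
# pass it to Eclipse on the command line as: -debug $HOME/dsf.debug.options
org.eclipse.cdt.dsf/debugCache = true
org.eclipse.cdt.dsf.gdb/debug = true
```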
Created attachment 171931 [details]
Stack trace and gdb traces
I can reproduce the deadlock!
I run a multi-threaded program in non-stop mode.
I first resume the program after main(), then interrupt the first thread, select a couple of stack frames, resume the thread, then interrupt the last thread, and BOOM!
I'm not sure which of those steps are really necessary, but it does reproduce the problem.
Attached is the stack trace and gdb traces.
Created attachment 171935 [details]
Fix
My bad.
This should fix the deadlock.
Can someone review? Pawel, John?

(In reply to comment #10)
> Can someone review? Pawel, John?

Looks good to me. It also poses no chance of regression, IMO, so safe to put in at the last second.

Committed to both HEAD and 7.0.1.

*** cdt cvs genie on behalf of mkhouzam ***

Bug 314447: Missing rm.done()

[*] MIStack.java 1.17
http://dev.eclipse.org/viewcvs/index.cgi/org.eclipse.cdt/dsf-gdb/org.eclipse.cdt.dsf.gdb/src/org/eclipse/cdt/dsf/mi/service/MIStack.java?root=Tools_Project&r1=1.16&r2=1.17
[*] MIStack.java 1.16.2.1
http://dev.eclipse.org/viewcvs/index.cgi/org.eclipse.cdt/dsf-gdb/org.eclipse.cdt.dsf.gdb/src/org/eclipse/cdt/dsf/mi/service/MIStack.java?root=Tools_Project&r1=1.16&r2=1.16.2.1
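For readers unfamiliar with DSF's asynchronous style, below is a minimal, self-contained Java sketch (not the actual MIStack or DSF code; all class and method names are illustrative) of the failure mode the commit message names: an asynchronous method that returns on some path without calling done() on its request monitor leaves the waiting caller blocked forever, and when that caller is effectively holding up the UI thread, the result is the kind of lockup reported here.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/*
 * Illustrative stand-in for the DSF request-monitor pattern (not the real
 * DSF classes): a service method signals completion by calling done() on a
 * monitor, and a caller blocks until that happens. If any code path returns
 * without calling done(), the waiting caller hangs forever.
 */
public class MissingDoneSketch {

    /** Minimal request-monitor stand-in: done() releases anyone waiting on it. */
    static class RequestMonitor {
        private final CountDownLatch latch = new CountDownLatch(1);
        void done() { latch.countDown(); }
        boolean awaitDone(long seconds) throws InterruptedException {
            return latch.await(seconds, TimeUnit.SECONDS);
        }
    }

    /** Buggy version: the early-return path never completes the monitor. */
    static void getFramesBuggy(boolean threadIsRunning, RequestMonitor rm) {
        if (threadIsRunning) {
            return; // BUG: missing rm.done(); the caller below waits forever
        }
        rm.done();
    }

    /** Fixed version: every path completes the monitor before returning. */
    static void getFramesFixed(boolean threadIsRunning, RequestMonitor rm) {
        if (threadIsRunning) {
            rm.done(); // the essence of the fix: always signal completion
            return;
        }
        rm.done();
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        RequestMonitor buggy = new RequestMonitor();
        executor.execute(() -> getFramesBuggy(true, buggy));
        // Times out after 2 seconds and prints "false": the request never completes.
        System.out.println("buggy path completed: " + buggy.awaitDone(2));

        RequestMonitor fixed = new RequestMonitor();
        executor.execute(() -> getFramesFixed(true, fixed));
        // Prints "true" almost immediately: the monitor is completed on every path.
        System.out.println("fixed path completed: " + fixed.awaitDone(2));

        executor.shutdown();
    }
}
```

In the real DSF code the blocked caller is typically a Query-style blocking call submitted to the session executor; the sketch models only that blocking-wait aspect.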
Created attachment 169976 [details]
backtrace

While using DSF it locked up completely :(

Build based on 2010-05-01 19:09:26

Backtrace attached.