The Genesis sample-return vehicle, er, hard-landed yesterday (news, video - 6 MB QuickTime). The waiting choppers, piloted by famous stunt dudes, were fortunately nowhere near (they weren't hired to stop a ballistic missile). The solar-wind sample discs are almost certainly mixed up with ambient bits of Utah, rendering them of little remaining interest to space scientists.
The tinkle (lost samples) is more of a problem for scientists than the smash, though less newsworthy. Engineers, who deliver the science baby, should be pondering. Beagle-ers have heard it all before, and wish they had at least seen that one too.
Clearly, there is a reporting bias that brings disasters to the fore, so that spectators grumble in pubs: "why can't they just do things right?". Putting aside that spin, and even allowing for the fact that in millions of everyday cases innovation goes wrong before it goes right, there are long-standing questions about the right way to perform such experiments.
Do engineers and their managers really know where they stand on the scope-vs-risk slope? Or are there deep gaps in our knowledge of the uncertainties? Would even greater modelling of outcomes, or more diagnostic instrumentation of the apparatus, help or hinder? One has to remember that such efforts have to come out of the same project's budget, putting us on an entirely different scope/risk hill. On the other hand, ontological polyfiller, like talk, is cheap.
Even if we did know where we stood on any given mission, the debate continues as to the best economic choice for a programme (BS 6079: a "group of strategically-related projects"). Test, test, test, and guarantee an occasional Rolls-Royce ride to the finish line? Or race, learn, race, learn, sending waves of evolving Fords out to the track?
Remember, though, that your race entry fee includes the use of a space rocket and is priced accordingly.
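For the curious, the Rolls-vs-Fords question can be sketched as a toy expected-cost calculation. Every number below is an invented assumption for illustration, not data from any real programme: a single well-tested vehicle with high per-attempt reliability, against several cheap vehicles with lower odds each.

```python
def expected_cost(unit_cost: float, p_success: float, max_attempts: int) -> float:
    """Expected total spend across up to max_attempts tries at one success.

    Each attempt is paid for only if all earlier attempts failed, so the
    expected spend on attempt k is unit_cost * (1 - p_success)**(k - 1).
    """
    total = 0.0
    p_all_failed_so_far = 1.0
    for _ in range(max_attempts):
        total += p_all_failed_so_far * unit_cost
        p_all_failed_so_far *= (1.0 - p_success)
    return total

# "Rolls-Royce": one expensive, heavily tested shot (numbers assumed).
rolls = expected_cost(unit_cost=500.0, p_success=0.95, max_attempts=1)

# "Evolving Fords": cheaper shots, lower odds each, several tries (assumed).
fords = expected_cost(unit_cost=150.0, p_success=0.60, max_attempts=4)

print(f"Rolls strategy: expected spend {rolls:.1f}, success prob 0.95")
print(f"Fords strategy: expected spend {fords:.1f}, "
      f"success prob {1 - 0.4 ** 4:.4f}")
```

With these made-up figures the Ford wave is both cheaper in expectation and more likely to succeed overall, but the conclusion flips easily as unit costs or per-attempt reliabilities move, which is rather the point of the debate.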