By James Turner
No software survives contact with its users: no matter how well it’s tested during development, users will find strange behaviours and bugs. They may complain to you directly, or even publicly, while struggling to provide consistent information that would help track down the underlying causes.
In this talk, we’ll use the example of adding direct crash- and error-reporting to a large desktop application, and the lessons learned along the way: about cross-platform portability, user behaviour, driver bugs and, of course, straightforward coding issues. These lessons apply to any widely deployed software, whether it runs on end-user machines, embedded devices or in the data center.
We’ll cover the development-workflow changes needed to generate reporting and symbol information during automated builds, along with other code changes that collect better feedback; briefly consider the privacy implications of automated reporting; and look at how aggregate analysis across the entire user base can drive decision-making about releases and bug fixes.
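As a taste of the aggregate-analysis idea, here is a minimal, hypothetical sketch: the report fields (`signature`, `user`, `version`) and the sample data are invented for illustration, not taken from any real reporting service. It shows why ranking crashes by distinct affected users, rather than raw report count, gives a fairer picture for release decisions.

```python
from collections import defaultdict

# Hypothetical crash reports: each carries a stack "signature"
# (the top frames of the crash) and an anonymous user id.
reports = [
    {"signature": "GPU::draw", "user": "u1", "version": "2.4"},
    {"signature": "GPU::draw", "user": "u2", "version": "2.4"},
    {"signature": "IO::load",  "user": "u1", "version": "2.4"},
    {"signature": "GPU::draw", "user": "u3", "version": "2.3"},
]

def rank_by_affected_users(reports):
    """Rank crash signatures by how many distinct users they hit,
    so a single user stuck in a crash loop can't dominate."""
    users = defaultdict(set)
    for r in reports:
        users[r["signature"]].add(r["user"])
    return sorted(((sig, len(u)) for sig, u in users.items()),
                  key=lambda kv: -kv[1])

print(rank_by_affected_users(reports))
# → [('GPU::draw', 3), ('IO::load', 1)]
```

In practice the same grouping can be sliced by version or platform to decide whether a fix warrants an immediate point release.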