I've been thinking about software testing quite a bit lately. I've been thinking about the artificial intelligence/algorithms used in game programming too. And so naturally, at some point it occurred to me that the same algorithms used to make all those characters, monsters, cars, or whatever meander in and out of the background scenery of our favorite games might be useful in software testing scenarios.
Imagine you need to test a particularly hard-to-pin-down bug - one that only happens during periods of high usage. It seems to me that you have a couple of options. You can ask your users to pitch in and help with testing (not likely, and certainly not popular). You can beg your fellow programmers to do their best to mimic users (also not likely or popular). Or lastly (wishful thinking), you could fire up some program that spawns an army of autonomous agents to meander through common use cases and behave like normal users, creating the "background traffic" needed for testing.
This is currently just an idea in its infancy; I don't know of any actual implementations. But it does seem like an interesting area for some research and coding. One could imagine initial versions would need lists of steps that many agents could invoke, in random sequences and at random intervals, through some programming interface. However, far-off future versions might know how to inspect user interfaces for menus, buttons, and dialog boxes - simulating mouse clicks and keyboard entry (valid or invalid) until something happens in response.
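To make the "initial version" idea concrete, here's a minimal sketch in Python. Everything in it is hypothetical: the steps (`login`, `browse`, `checkout`) stand in for whatever calls your application's programming interface actually exposes, and each agent just picks steps at random with random pauses in between.

```python
import random
import threading
import time

# Hypothetical "steps" -- in a real harness these would call into the
# application under test through its API.
def login(log):    log.append("login")
def browse(log):   log.append("browse")
def checkout(log): log.append("checkout")

STEPS = [login, browse, checkout]

def agent(log, n_actions, rng):
    """One autonomous agent: invoke random steps at random intervals."""
    for _ in range(n_actions):
        rng.choice(STEPS)(log)
        time.sleep(rng.uniform(0, 0.01))  # jitter between actions

def spawn_army(n_agents=5, n_actions=10, seed=42):
    """Run many agents concurrently to generate background traffic."""
    log = []  # list.append is thread-safe in CPython
    threads = [
        threading.Thread(target=agent,
                         args=(log, n_actions, random.Random(seed + i)))
        for i in range(n_agents)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return log

traffic = spawn_army()
print(len(traffic))  # 5 agents x 10 actions = 50 simulated user actions
```

The interleaving of the agents' actions is what produces the high-usage conditions - each run shuffles the traffic differently, which is exactly the kind of variation a load-sensitive bug needs.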
Although I haven't found anything this sophisticated yet, it occurs to me that areas of security research might already have made progress on similar tools. Black-box penetration testing usually involves sending random (or perhaps not-so-random) payloads in an attempt to elicit an unexpected error or otherwise interesting response.
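That payload-slinging approach is essentially fuzzing, and it's easy to sketch. Below, `parse_command` is a made-up stand-in for the system under test (with a deliberately planted bug), and the "not-so-random" trick is grafting random junk onto a known-valid prefix so the fuzzer gets past the input's front door more often.

```python
import random
import string

def fuzz_payload(rng, max_len=16):
    """Build a random payload; half the time, graft it onto a valid prefix."""
    body = "".join(rng.choice(string.printable)
                   for _ in range(rng.randint(0, max_len)))
    if rng.random() < 0.5:
        return "GET " + body  # "not so random": start from a known command
    return body

def parse_command(text):
    """Toy stand-in for the system under test, with a lurking bug."""
    if text.startswith("GET "):
        return text.split()[1]  # IndexError when nothing follows "GET "
    return None

def fuzz(target, trials=1000, seed=0):
    """Hammer the target with payloads, recording any crash it triggers."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        payload = fuzz_payload(rng)
        try:
            target(payload)
        except Exception as exc:
            crashes.append((payload, exc))
    return crashes

crashes = fuzz(parse_command)
print(f"{len(crashes)} crashing payloads found")
```

Purely random bytes would almost never spell out `GET `, which is why the biased generator matters - the same reasoning would apply to a testing agent that mixes valid use-case steps with occasional garbage.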
So, any takers want to whip something like this up for me?