Playtesting, in my experience, generally requires serious, contiguous blocks of time, which is a rare commodity for me (and one that competes with other activities).
Ty,
That's been my experience too with real playtests. Sadly, what usually passes for a playtest in the Age of the Internet bears no resemblance to real playtesting.
I've been involved in wargame playtesting since 1976 through an FLGS called "The Citadel" and its owner, Pat Flores. Pat was known to the companies of the period, primarily Avalon Hill and SPI. Depending on what facet of the playtests they entrusted him with, you were expected to spend at least a weekend day at the store helping him. Not a few hours, not an afternoon, a day. Arrive at 8am and leave at 8pm or later. Arrive late or leave early? Then don't bother coming back.
And those 12+ hour playtests only occurred when the playtest was pretty far along and they only wanted to see if someone could play the game out of the box.
If you got involved in the real playtests, a right you earned through your performance in the lesser run-throughs, you were at the store for entire weekends, not just days.
In these playtests, and unlike any RPG playtest I have been involved with in the last decade, we actually tackled basic design features. For example, we started every playtest by physically constructing the game's map and counters. That meant, among other things, that we checked the accuracy of a map's features and scale along with the various designators on the game's counters.
A lot of the drudgery and consultations in these playtests occurred with the counters. The devil is truly in the details. The counters and the factors on them had to exhibit an internal consistency which matched the game's scale and intent. From a statistical standpoint, one unit could not be an outlier whose presence would unbalance the game - unless the historical unit it represented was also an outlier.
I can't stress enough that basic game design choices were routinely questioned during these playtests, and I saw design choices changed often enough for it not to seem odd. The designer or designers would have chosen a battle, campaign, or war, and then would have chosen an aspect they wished to focus on. After determining the focus, they'd craft, adopt, or adjust game mechanisms to express that intent. It was when their proposed mechanisms met their intended focus that the wrangling began. What a designer saw as a no-brainer might not be seen the same way by a player, and vice versa. A lot of phone time and postage stamps were spent during these design choice discussions.
Contrast that with our current internet-based, manuscript playtest process. A playtest these days is nothing more than a glorified typo hunt. Typos, grammar, numbers, fact checking, confusing sentences, and other similar editing concerns are handled by the playtesters, while the basic design choices are rarely examined, if such an examination is allowed at all. This lack of examination of basic features can only hurt the products involved.
Richard Berg, of all people, an ex-lawyer turned wargame designer and all-around "difficult" human being, once told me that if a designer cannot succinctly explain the reasons behind a design choice to an informed layman, then the design choice in question has not been well thought out. Putting it another way, if you can't clearly explain your design choice, you don't clearly understand it yourself.
Designers and writers have blind spots because designers and writers are human. When we limit playtest questioning, whether out of misplaced pride or time constraints, to relatively inconsequential things like typos and other editing concerns, we fail to identify and correct the much larger problems those blind spots create. When those larger, more basic problems slip by, the result is a fundamentally flawed product, one that will be much harder to correct than a product with typos.
Regards,
Bill