Yeah, "beta" has a fuzzy definition. For some (mainly MBA folks), it means any testing done with real customers, and says nothing about the quality or capabilities of what is being tested. For the rest of us, "beta" generally means the first round of external "all up" testing, where the software is "essentially" complete, but with a (possibly substantial) list of bugs.
From a software perspective, this looks to be alpha-level code. Huge parts of the promised functionality are still missing or incomplete, but enough important pieces are present to be worth testing.
From a hardware perspective, it should be a gamma test (where "gamma" means something that's "done" but is delayed by other dependencies, such as software, finances or logistics).
From a business perspective, it's a beta only because "outsiders" are involved (other than employees, investors, friends and family).
Properly sizing a beta can be tricky: Make it too small (not enough tough testers included), and nasty bugs may not be revealed as early as one would like. Too large, and triaging bug reports can absorb all available time.
Beta sizing is one key place where several software engineering tools prove their worth: requirements tracing, test coverage, static code analysis, bug tracker status and history, repository activity, and other measures of software progress, quality, and programmer productivity.
One of the important values to track is the net rate at which the software staff can verify a bug (write a clear description and a failing test for it) and clear it (check in a patch that makes the test pass). A properly-sized beta will saturate the software staff, but not overwhelm them. It is meant to be a sprint, not a death march.
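To make that concrete, here's a back-of-the-envelope sketch (in Python, with made-up numbers; the real rates come from your own tracker history and repo activity):

    # Back-of-the-envelope beta sizing. All numbers are hypothetical
    # placeholders; pull real rates from your bug tracker history and
    # repository activity.
    def max_beta_size(clears_per_dev_per_week, dev_count,
                      reports_per_tester_per_week, duplicate_fraction=0.5):
        """Largest tester pool whose unique-bug inflow the team can clear."""
        team_capacity = clears_per_dev_per_week * dev_count
        unique_per_tester = reports_per_tester_per_week * (1 - duplicate_fraction)
        return int(team_capacity / unique_per_tester)

    # e.g., 6 devs each clearing 8 bugs/week, testers filing 5 reports/week,
    # half of them duplicates: about 19 testers keeps the team saturated.
    print(max_beta_size(8, 6, 5))  # -> 19

The exact arithmetic matters less than the habit: size the tester pool against the team's clearing capacity, not against a wish list.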
It is vital to ensure the software staff is doing nothing but software. Other employees (or contractors) can take reports from the beta testers, do the steps needed to verify a software bug actually exists (not a documentation bug), and enter it into the bug tracker. A junior programmer can write a test script. The senior programmers should be doing nothing other than working the "worst" bugs from the top of the stack, kicking any unclear reports back for clarification.
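Here's a minimal sketch of that "worst first" stack, with an invented severity scale (real trackers have their own):

    import heapq

    # Minimal triage-queue sketch. The severity scale is an assumption for
    # illustration, not any particular tracker's schema. Triage staff push
    # verified bugs; seniors always pop the worst remaining one. (Kicking an
    # unclear report back to triage isn't modeled here.)
    SEVERITY = {"crash": 0, "data-loss": 1, "wrong-result": 2, "cosmetic": 3}

    queue = []  # heap of (severity_rank, bug_id, summary)

    def file_bug(bug_id, severity, summary):
        heapq.heappush(queue, (SEVERITY[severity], bug_id, summary))

    def next_bug():
        return heapq.heappop(queue)  # the worst bug currently on the stack

    file_bug(101, "cosmetic", "settings button misaligned")
    file_bug(102, "crash", "hub reboots when a sensor is added")
    print(next_bug())  # -> (0, 102, 'hub reboots when a sensor is added')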
To the greatest extent possible, bugs should be clumped into short sprints: attacking related bugs together increases productivity, and it ensures beta testers are given fresh software as often as possible (far faster than a "normal" release schedule).
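A sketch of the clumping step, with invented bug records (use whatever fields your tracker exports):

    from collections import defaultdict

    # Sketch of "clumping" open bugs into sprint batches by component.
    # Bug tuples and component tags are invented for illustration.
    open_bugs = [
        (102, "radio", "crash"),
        (115, "radio", "wrong-result"),
        (108, "ui", "cosmetic"),
        (121, "rules-engine", "crash"),
    ]

    batches = defaultdict(list)
    for bug_id, component, severity in open_bugs:
        batches[component].append(bug_id)

    # One component per short sprint, worst-hit components first.
    for component, bug_ids in sorted(batches.items(), key=lambda kv: -len(kv[1])):
        print(component, bug_ids)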
The software staff should have no direct interaction with beta testers, since that's a waste of their over-committed time. If a developer has a question about an issue report, that question should be relayed to the beta tester by other staff. The only possible exception is when the beta tester is a domain expert who can directly contribute to the solution (that is, a clear and obvious time saver).
Having a good beta means getting good data from the beta testers. And few folks are naturally good testers! Start them off with a list of "known working" exercises to perform to quickly gain familiarity with the system (and debug the documentation). Before turning them loose, give them a few exercises that include known bugs, and walk them through making complete issue submissions. Ensure they fill out a simple form for each issue they encounter (what was expected, what happened instead, and the steps needed to reproduce it).
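If you want that form to be machine-checkable, a minimal sketch might look like this; the field names are mine, not any particular tracker's:

    from dataclasses import dataclass

    # Minimal issue-form sketch: one record per problem, and it can't be
    # filed with any of the three essentials missing.
    @dataclass
    class IssueReport:
        expected: str               # what the tester expected to happen
        actual: str                 # what happened instead
        steps_to_reproduce: list    # exact steps, noted as they were taken

        def __post_init__(self):
            if not (self.expected and self.actual and self.steps_to_reproduce):
                raise ValueError("incomplete report: all three fields are required")

    report = IssueReport(
        expected="lamp turns on when motion is detected",
        actual="lamp stays off; hub LED blinks red",
        steps_to_reproduce=["arm rule #3", "walk past the sensor", "wait 10 s"],
    )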
That is to say, the first step in a beta is to train the beta users to become useful testers. Good observers. Users who take notes of what they do as they do it, rather than rely on memory. Make them familiar with the beta reporting tools and how to use them correctly. In a perfect world, all the tools the user needs would already be built in. At the very least, it should be trivially easy for the user to cause key pieces of information to be generated and included with the issue report (e.g., screenshots, execution and interaction traces, other logs).
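As a sketch of what "trivially easy" could mean in practice, here's a hypothetical one-call helper that bundles a log tail and a screenshot into a single attachment (all paths and names are placeholders):

    import zipfile
    from pathlib import Path

    # Hypothetical "attach diagnostics" helper: grab the tail of a log and
    # bundle it, plus a screenshot if one exists, into a zip the tester
    # attaches to the issue report. Paths are placeholders.
    def bundle_diagnostics(log_path="device.log", shot_path="screen.png",
                           out_path="issue-attachment.zip", tail_lines=200):
        with zipfile.ZipFile(out_path, "w") as zf:
            log = Path(log_path)
            if log.exists():
                tail = "".join(log.read_text().splitlines(keepends=True)[-tail_lines:])
                zf.writestr("log-tail.txt", tail)
            shot = Path(shot_path)
            if shot.exists():
                zf.write(shot, shot.name)
        return out_path

    bundle_diagnostics()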
I could go on and on. What I've been describing is more correctly called a "closed" or "limited" beta, as compared to an "open" beta. An open beta is really just a slow product roll-out, with controlled growth and no user preparation other than the included documentation. WigWag is nowhere close to being ready for an open beta: Their open beta will be shipping to all KS backers.
From another perspective, beta testing should be viewed as a large experiment, much like a medical study, where good data needs to be "extracted" from non-experts. There's a whole field of study concerned with the proper design of experiments of all kinds, called, oddly enough, "Design of Experiments". It amazes me that many science and engineering degrees still fail to include even a single course on this vital topic.
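For a taste of what DoE brings to a beta, here's a toy full-factorial sketch: pick the factors you suspect matter, enumerate every combination, and recruit testers to cover each cell. Factor names and levels are invented for illustration:

    from itertools import product

    # Toy full-factorial sketch: enumerate every combination of the factors
    # you suspect matter, then recruit at least one tester per cell. The
    # factor names and levels here are invented for illustration.
    factors = {
        "home_size": ["apartment", "house"],
        "wifi_gear": ["consumer", "prosumer"],
        "phone_os":  ["iOS", "Android"],
    }

    for i, cell in enumerate(product(*factors.values()), 1):
        print(f"tester group {i}: {dict(zip(factors, cell))}")
    # 2 x 2 x 2 = 8 cells; oversample the cells you expect to be troublesome.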
An uncontrolled or poorly designed beta yields confusion, frustration and schedule delays. A good beta focuses the team and lights the afterburners.
You should know whether you have a good or bad beta before the first beta test packages ship. If you don't know, then you likely have a bad beta. Fortunately, there are consultants who set up and run beta tests for a living.