Documentation
The first thing to be said here is that if you're looking for an API reference, I haven't been able to find one online. However, it's all in a help file in OpenNI's start menu group.
When it comes to OpenNI documentation, there's plenty of it, even though it's not very good in general. The most extreme example of this I've found is:

```c
typedef XnUInt16 XnLabel; // Defines the label type
```

Okay, PrimeSense... your users can read C. They can see that it defines a label type. What they want to know is: what does it label? Why is the label an unsigned 16-bit integer and not, say... a string? You need to go look at the code samples in order to figure that one out. In this case, it turns out that it is a label for which user is present in each image pixel. A pixel containing a 0 has no user present, a pixel containing a 1 belongs to user 1, and so on.
It's also not very clear from the documentation when, exactly, each callback is called. For instance, the LostUser callback isn't called until roughly ten seconds after the user has left the frame, although it is called immediately if the sensor is completely obscured.
The "Psi" pose
The Kinect provides an OpenNI UserGenerator, capable of pose detection. The only pose it can detect is named "Psi". This name appears in the samples as a magic constant, without a single comment, which makes you wonder where it came from and which pose it is. It turns out that there actually is documentation describing it, but not in the API reference or the user guide, where you might look. It's actually in a PDF document called "NITE Algorithms", in NITE's program group, but I'll save you the time: you need to be standing up, with your upper arms making a 90° angle with your torso, and your forearms and hands pointing up, making you look like the Greek letter psi. Starting from this pose, and after a roughly three-second calibration, you'll be able to track the user's skeleton.
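Tying this together, the flow used by the OpenNI/NITE samples chains three callbacks: a new user triggers pose detection, detecting the Psi pose triggers a calibration request, and a successful calibration starts skeleton tracking. Here's an abridged sketch of that chain (context creation, node setup, and error handling are omitted, so treat it as a sketch rather than a drop-in program):

```cpp
#include <XnCppWrapper.h>

xn::UserGenerator g_UserGenerator;

// A new user entered the frame: start looking for the Psi pose.
void XN_CALLBACK_TYPE User_NewUser(xn::UserGenerator& generator,
                                   XnUserID nId, void* /*pCookie*/)
{
    generator.GetPoseDetectionCap().StartPoseDetection("Psi", nId);
}

// Called roughly ten seconds after the user has left the frame.
void XN_CALLBACK_TYPE User_LostUser(xn::UserGenerator& /*generator*/,
                                    XnUserID /*nId*/, void* /*pCookie*/)
{
}

// The Psi pose was detected: stop pose detection and request calibration.
void XN_CALLBACK_TYPE Pose_Detected(xn::PoseDetectionCapability& capability,
                                    const XnChar* /*strPose*/, XnUserID nId,
                                    void* /*pCookie*/)
{
    capability.StopPoseDetection(nId);
    g_UserGenerator.GetSkeletonCap().RequestCalibration(nId, TRUE);
}

// The ~3 second calibration finished: on success, start skeleton tracking.
void XN_CALLBACK_TYPE Calibration_End(xn::SkeletonCapability& capability,
                                      XnUserID nId, XnBool bSuccess,
                                      void* /*pCookie*/)
{
    if (bSuccess)
        capability.StartTracking(nId);
    else // Calibration failed: go back to waiting for the Psi pose.
        g_UserGenerator.GetPoseDetectionCap().StartPoseDetection("Psi", nId);
}

// Registration, once g_UserGenerator has been created from a context:
// XnCallbackHandle hUser, hPose, hCalib;
// g_UserGenerator.RegisterUserCallbacks(User_NewUser, User_LostUser, NULL, hUser);
// g_UserGenerator.GetPoseDetectionCap()
//     .RegisterToPoseCallbacks(Pose_Detected, NULL, NULL, hPose);
// g_UserGenerator.GetSkeletonCap()
//     .RegisterCalibrationCallbacks(NULL, Calibration_End, NULL, hCalib);
```

This mirrors the structure of the NiUserTracker sample that ships with OpenNI; the callback names here are my own.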
And that's it for today. I hope you found this useful.