A lot of games (especially Unity-based ones) seem to have a pretty bad setup for their input event handling. Looking at the SDK, all controller events go through the same View.onMotionEvent() hook, meaning a game receives one event for every fine-grained change in the analog sticks' position. What I've noticed in a lot of games is that wiggling the stick between frames causes motion events to pile up, which then take several seconds to replay; that's not very helpful for timing-sensitive games (i.e. most of them). In some of these games, holding the stick doesn't seem to have much effect, either.
I suspect these problems come from games trying to do too much in onMotionEvent(), possibly even doing their OpenGL repaint from there. A much better approach is for onMotionEvent() to simply record the new stick positions, and to drive player motion and the like from the renderer's onDrawFrame() handler. That guarantees exactly one position update per frame, and it means the render rate effectively polls the joystick positions, rather than joystick position updates trying to push renders faster than the Ouya can handle.
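To make that decoupling concrete, here's a minimal sketch of the pattern in plain Java. The ControllerState and StickState names are my own, and the actual MotionEvent/GLSurfaceView plumbing is omitted: the event handler only records the latest stick position, and the render loop reads it once per frame.

```java
import java.util.concurrent.atomic.AtomicReference;

// Immutable snapshot of one stick's position.
final class StickState {
    final float x, y;
    StickState(float x, float y) { this.x = x; this.y = y; }
}

final class ControllerState {
    private final AtomicReference<StickState> left =
            new AtomicReference<StickState>(new StickState(0f, 0f));

    // Called from onGenericMotionEvent(): just record, never render.
    void onMotion(float x, float y) {
        left.set(new StickState(x, y));
    }

    // Called once per frame from onDrawFrame(): poll the latest value.
    StickState poll() {
        return left.get();
    }
}

class StickStateDemo {
    public static void main(String[] args) {
        ControllerState c = new ControllerState();
        // Many events may arrive between frames...
        c.onMotion(0.1f, 0.2f);
        c.onMotion(0.7f, -0.3f);
        // ...but the renderer consumes exactly one state per frame.
        StickState s = c.poll();
        System.out.println(s.x + ", " + s.y);
    }
}
```

However many events arrive between frames, the renderer only ever sees the most recent position, so nothing queues up and nothing has to be replayed.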
Is there any section of the developer site that tracks best-practice/common-pitfall issues like these? I really want to see Ouya games actually be playable and fun, instead of frustrating experiences that fall apart on real hardware.
Comments
Sorry for the late response but I waited for the retail version to start development.
@Override
public boolean onGenericMotionEvent(MotionEvent event)
{
    int playerIndex = OuyaController.getPlayerNumByDeviceId(event.getDeviceId());
    // handle joystick events (getSource() is a bitmask, so test with & rather than ==)
    if((event.getSource() & InputDevice.SOURCE_JOYSTICK) == InputDevice.SOURCE_JOYSTICK
            && event.getAction() == MotionEvent.ACTION_MOVE)
    {
        // only process analog events every 1/60 (0.016) seconds
        float timeDiff = ((float)event.getEventTime() / 1000.0f) - lastAnalogTime[playerIndex];
        if(timeDiff < 0.016f)
            return super.onGenericMotionEvent(event);

        // get the coordinates for both sticks and the trigger values
        float xLeft    = event.getAxisValue(MotionEvent.AXIS_X);
        float yLeft    = -event.getAxisValue(MotionEvent.AXIS_Y);
        float xRight   = event.getAxisValue(MotionEvent.AXIS_Z);
        float yRight   = -event.getAxisValue(MotionEvent.AXIS_RZ);
        float lTrigger = event.getAxisValue(MotionEvent.AXIS_LTRIGGER);
        float rTrigger = event.getAxisValue(MotionEvent.AXIS_RTRIGGER);

        float xLeftDiff    = Math.abs(xLeft - lastLeftX);
        float yLeftDiff    = Math.abs(yLeft - lastLeftY);
        float xRightDiff   = Math.abs(xRight - lastRightX);
        float yRightDiff   = Math.abs(yRight - lastRightY);
        float lTriggerDiff = Math.abs(lTrigger - lastTriggerL);
        float rTriggerDiff = Math.abs(rTrigger - lastTriggerR);

        // update last frame data (note: unlike lastAnalogTime, these last*
        // fields are shared across players; index them per player as well
        // if more than one controller is supported)
        lastAnalogTime[playerIndex] = event.getEventTime() / 1000.0f;
        lastLeftX = xLeft;
        lastLeftY = yLeft;
        lastRightX = xRight;
        lastRightY = yRight;
        lastTriggerL = lTrigger;
        lastTriggerR = rTrigger;

        // make sure the change in analog or trigger state is worth the JNI call
        if( xLeftDiff > kEpsilon || yLeftDiff > kEpsilon ||
            xRightDiff > kEpsilon || yRightDiff > kEpsilon ||
            lTriggerDiff > kEpsilon || rTriggerDiff > kEpsilon)
            NativeLib.OnAnalogMotion(playerIndex, xLeft, yLeft, xRight, yRight, lTrigger, rTrigger);
    }
    // handle mouse events
    if((event.getSource() & InputDevice.SOURCE_MOUSE) == InputDevice.SOURCE_MOUSE)
    {
        //float xMouse = event.getX();
        //float yMouse = event.getY();
    }
    return super.onGenericMotionEvent(event);
}
This might be overkill, though: the culling itself could end up costing more than the JNI call it's meant to avoid.
Here's a decent link on this very question/presumption: http://stackoverflow.com/questions/7699020/what-makes-jni-calls-slow
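For what it's worth, the "is this change worth the JNI call" check can be pulled out as a pure function, which makes it easy to benchmark on its own against the cost of just making the call every time. The MotionCull name and the kEpsilon value here are assumptions:

```java
// Pure-Java extraction of the epsilon-culling check, so its cost can be
// measured in isolation. kEpsilon is an assumed dead-zone threshold.
final class MotionCull {
    static final float kEpsilon = 0.01f;

    // Returns true if any axis moved enough to justify crossing into JNI.
    static boolean changed(float[] last, float[] now) {
        for (int i = 0; i < now.length; i++)
            if (Math.abs(now[i] - last[i]) > kEpsilon)
                return true;
        return false;
    }

    public static void main(String[] args) {
        float[] last = {0f, 0f, 0f, 0f, 0f, 0f};
        float[] jitter = {0.005f, 0f, 0f, 0f, 0f, 0f}; // below threshold
        float[] move   = {0.5f, 0f, 0f, 0f, 0f, 0f};   // real movement
        System.out.println(changed(last, jitter)); // tiny jitter is culled
        System.out.println(changed(last, move));   // real motion passes
    }
}
```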
So in the end I chose to pass all events through to my JNI code using intrinsic type parameters (NO data structures or complex Java types), keeping the Java code as short and simple as possible.
The big reason for me was that I did NOT want to induce the GC in any way, shape or form. I am not a 24/7 Java programmer, but what I know about the GC and the subtle ways it can be invoked (more like awakening ye ole apocalyptic dragon of doom) is nothing short of insanity to my eyes.
So far I can detect no Java event -> JNI -> C++ slowdowns, and I never see the GC fire... Nirvana.
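For anyone curious, the allocation side of this is easy to sketch: keep one preallocated buffer for the analog state and overwrite it in place, so the hot path never creates objects for the GC to collect. The AnalogState name and the array layout are my own:

```java
// Sketch of the allocation-free pattern: reuse a single preallocated
// float array for controller state instead of creating a new object per
// event, so the input hot path never wakes the GC.
final class AnalogState {
    // index constants into one reusable array (assumed layout)
    static final int LX = 0, LY = 1, RX = 2, RY = 3, LT = 4, RT = 5;
    private final float[] values = new float[6]; // allocated once

    // Overwrites in place; no allocation per event.
    void update(float lx, float ly, float rx, float ry, float lt, float rt) {
        values[LX] = lx; values[LY] = ly;
        values[RX] = rx; values[RY] = ry;
        values[LT] = lt; values[RT] = rt;
    }

    float get(int axis) { return values[axis]; }

    public static void main(String[] args) {
        AnalogState s = new AnalogState();
        s.update(0.5f, -0.5f, 0f, 0f, 1f, 0f);
        System.out.println(s.get(LX) + " " + s.get(LT));
    }
}
```

The same idea carries across the JNI boundary: passing only primitives means no boxing and no temporary Java objects on either side of the call.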
I'm guessing your approach has more setup time, since you have to cache Java objects and deal with threads, whereas I just wrap a minimal Java activity. But your approach obviously has more performance potential: once the initial setup is done, you have much more control over when the JNI boundary is crossed.
I'll be following your posts closely :)
I didn't even know the Nvidia Shield was available for pre-order, thanks for the heads up.