OpenGL on Android — Basics and gesture handling

In this short series of articles I will present a basic Android application that uses OpenGL to show a simple pitch with an interactive soccer ball :). The second part will expand on painting and drawing with shaders, and the third one on working with textures.

The final result is simple enough for beginners to start with, yet complex enough that we will cover many useful OpenGL topics.

I highly recommend checking the official Android documentation on OpenGL, as it contains much more detailed information and is a good companion to this article.

The source code for the complete application is available in our GitHub repository.

OpenGL on Android

Android is a visually rich framework. For animations we have the powerful MotionLayout, there is support for Lottie, and of course a bunch of more or less advanced transitions.

Many 2D drawings can be realised via the Canvas API. Taking everything into consideration, the question is: why discuss OpenGL at all?

There are still cases that require a more powerful technology such as OpenGL:

  • 3D graphics

There are obviously more reasons to apply this technology to your mobile application, but perhaps the most encouraging one is that OpenGL is largely platform-independent.

Shaders, parameters, and objects can be transferred freely between different devices. Some parts of the code may need syntax adjustments, but the concepts don’t change, which is a serious advantage.

In this series I am not going to discuss the different OpenGL ES versions, GLSL iterations, or other version-specific details. I will try to show you the most version-agnostic recipe for creating a working application.

Of course, there may still be things to adjust on your side to match your current OpenGL version. You can read more about OpenGL versions on Android here.

Architecture of the program

In order to create a working application with OpenGL, four core components are needed:

  1. OpenGL Surface — a view that places the drawing surface in the Android layout
  2. Renderer — an object that translates inputs into frames and invokes drawing
  3. Shape — a class that defines the geometry and passes parameters to the shaders
  4. Shaders — small programs that position the vertices and color the pixels

In most cases we can assume the dependency between the components as shown below:

Surface in the layout

Under the hood, GLSurfaceView is a SurfaceView that takes a Renderer. The Renderer is the object responsible for invoking drawing with the correct parameters.

The user interacts with the surface, so this view is where we handle drag actions and any other gestures. Intercepting these actions is as simple as in other Android views: just consume the event in setOnTouchListener and parse its action.

Speaking of touch events, new touch points and drag info should be passed down to the renderer object. The renderer should know the current touch position and decide how to draw the next frame.
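As a minimal sketch of that hand-off, assuming a relative 0..1 coordinate space and a hypothetical applyDrag helper (not the article's actual code):

```kotlin
// Illustrative helper: fold a drag delta (in pixels) into the current
// relative position before handing it to the renderer. The renderer only
// ever sees values clamped to the 0..1 range.
fun applyDrag(current: Float, deltaPx: Float, surfaceSizePx: Int): Float =
    (current + deltaPx / surfaceSizePx).coerceIn(0f, 1f)
```

The same helper can be applied per axis, once for X with the surface width and once for Y with the surface height.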

In MainActivity we perform two important steps:

  1. Create a renderer and pass a shape with a shader to it
  2. Set the renderer on the GLSurfaceView

At the end we have to calculate the point for OpenGL. X and Y must be relative values, which means the possible numbers are floats in the range from 0 to 1.
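A sketch of that calculation, with assumed names (normalizeTouch is hypothetical; SurfaceResolution and SurfacePoint mirror the simple data classes mentioned later):

```kotlin
// Illustrative data classes wrapping sizes and coordinates.
data class SurfaceResolution(val width: Int, val height: Int)
data class SurfacePoint(val x: Float, val y: Float)

// Convert an absolute touch position (pixels) into the relative 0..1
// coordinates expected by the renderer, clamping out-of-bounds events.
fun normalizeTouch(rawX: Float, rawY: Float, surface: SurfaceResolution): SurfacePoint =
    SurfacePoint(
        (rawX / surface.width).coerceIn(0f, 1f),
        (rawY / surface.height).coerceIn(0f, 1f)
    )
```

For example, a touch at (540, 960) on a 1080×1920 surface normalizes to (0.5, 0.5), the center of the view.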

And that is mostly all for the Activity. PinchDetector, along with scaleDetector, is just a simple implementation of ScaleGestureDetector.SimpleOnScaleGestureListener(), and its details are not crucial for now.

Drawing frame by frame with Renderer

The Renderer is our own implementation of GLSurfaceView.Renderer, and it should take all the inputs and translate them into a single frame.

I have placed two features in my class: dragging and scaling. Each of them has a corresponding listener and a volatile variable that stores the current value. Let’s see:
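A minimal sketch of what that state might look like, with assumed names; the values are written from the UI thread (touch listeners) and read on the GL thread in onDrawFrame, hence @Volatile:

```kotlin
// Illustrative renderer-side state. @Volatile guarantees the GL thread
// sees the latest values written by the UI thread without extra locking.
class TouchState {
    @Volatile var touchX: Float = 0.5f   // relative 0..1, start centered
    @Volatile var touchY: Float = 0.5f   // relative 0..1, start centered
    @Volatile var scale: Float = 1f      // current pinch scale factor
}
```

Simple writes and reads of a single Float are safe this way; anything that must update several values atomically would need a different mechanism.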

SurfaceResolution and SurfacePoint are just simple data classes for wrapping sizes and coordinates.

By default the element is placed in the center, and after the user’s drag it changes its position.

In order to show the element properly on the surface, I have limited its scale to the range represented by minScale and maxScale.
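The clamping itself can be sketched like this; the minScale and maxScale values here are assumptions for illustration, not the article's actual limits:

```kotlin
// Assumed limits: keep the element between half size and triple size.
val minScale = 0.5f
val maxScale = 3f

// Clamp whatever the pinch gesture requests into the allowed range.
fun clampScale(requested: Float): Float = requested.coerceIn(minScale, maxScale)
```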

Now, let’s have a look at the onSurfaceCreated, onSurfaceChanged, and of course onDrawFrame methods.

We have to pass three parameters to the shape:

  1. Dimensions of the surface (needed in the shader code to calculate a pixel’s position)
  2. The current touch position
  3. The current scale factor

That is all the work done in this class. Remember that the Renderer is mostly responsible for invoking the draw() method with the correct parameters.

Create a shape!

Most of the logic lives in the shape class. It defines the shape (how many vertices it has), locates the shaders in the program, and passes parameters to the shaders. A lot of work for a single class…

First, let’s look at the coordinates of the rectangle:
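The article's exact values are not shown here, but a full-surface rectangle in OpenGL's normalized device coordinates commonly looks like this (ordered for a triangle strip):

```kotlin
// Four vertices, two coordinates (x, y) each, covering the whole surface.
// OpenGL's normalized device coordinates run from -1 to 1 on both axes.
// This ordering suits GL_TRIANGLE_STRIP: it draws two triangles that
// together fill the rectangle.
val rectangleCoordinates = floatArrayOf(
    -1f, -1f,   // bottom-left
    -1f,  1f,   // top-left
     1f, -1f,   // bottom-right
     1f,  1f    // top-right
)
```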

GLShader is my interface for the shaders and it will be covered in the next part.

GLUtil is also my own class with useful code for OpenGL operations. Its methods are long and complicated, and I encourage you to look them up in the source code.

Back to the class: we define the 4 vertices needed to draw a rectangle over the whole surface, with two coordinates (x and y) per vertex. If you are not familiar with drawing with triangles, go here.

A buffer is used to allocate the vertices in memory for OpenGL access. The vertex shader will read them one by one while laying out the vertices on the surface.
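A typical way to allocate such a buffer on the JVM, shown as a sketch (toFloatBuffer is an assumed helper name):

```kotlin
import java.nio.ByteBuffer
import java.nio.ByteOrder
import java.nio.FloatBuffer

// Allocate a direct, native-ordered buffer so OpenGL can read the vertex
// data outside the managed heap. A float occupies 4 bytes.
fun toFloatBuffer(vertices: FloatArray): FloatBuffer =
    ByteBuffer.allocateDirect(vertices.size * 4)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer()
        .apply {
            put(vertices)
            position(0)   // rewind so OpenGL reads from the first vertex
        }
```

The direct allocation and native byte order are what make the buffer usable from the OpenGL side; a plain FloatBuffer.wrap() would not be.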

Now the rest of the shape:

In the next steps we have to load the shaders and attach them to the main program. In the initProgram() method we also invoke glLinkProgram(), which can be seen as an init method of the program.

Then, via glUseProgram(), we select which program is used during drawing and apply the coordinates to the vertex shader.

I have cut some lines in between; they were responsible for passing uniform values to the fragment shader. We will say more about them in the next part of the series, in which we describe shaders in more detail.

