Saturday, March 13, 2010

Higher Learning

One thing that I've come to realize about programming is that if you want to create better programs, you need a better understanding of the languages, tools, and processes you're using. Now, as great a tool as the internet can be for quickly finding information, I still have a soft spot for a solid book that covers a subject in depth and detail. Actually finding books of this caliber can be quite difficult, but when you find them, they are truly a godsend. The two most recent additions to my programming library are OpenGL SuperBible Fourth Edition and The iPhone Developer's Cookbook: Building Applications with the iPhone 3.0 SDK 2nd Edition.


Both of these books are monsters, weighing in at about 1,200 and 800 pages respectively. Diving into them, though, has been satisfying and a relief. It's nice to have solid reference material available versus wading through Google search listings.

Saturday, March 6, 2010

Pixel Fonts & Texturing

Typically, when you work in a new programming environment, the first program you write is "Hello World!". When writing a complex program, it is essential to display program-specific information on the screen while the program is running. A "Hello World!" program lays the groundwork to achieve this.

Now, as simple as displaying "Hello World!" on the screen might seem, in an OpenGL ES environment on the iPhone it is anything but. There is no built-in font. There are no built-in routines to handle pre-made fonts. Essentially, you have to do it yourself. That, or find some pre-made code, which I'm not the biggest fan of in this case. The reality is, custom font routines are a necessary evil when you work in a graphical environment. Doing them right takes a considerable amount of effort, and the payoff seems very small. When you finally have a solid, working set of custom font routines, showing them off just doesn't produce that wow factor. It's about as exciting as seeing "Hello World!" on the screen. From a production value standpoint, though, they definitely make a difference, and to me it's the difference between an okay program and an excellent one.

Now, the end goal of a custom font routine is to have code that looks something like:
displayFont(customFont, x, y, "Hello World!")

This encapsulates our core functionality, which is to display a string of text in a custom font at an (x,y) position on the screen. The way we do this is by stepping through each character of the text string and converting it into a code, typically ASCII. We use this code as an index into an array of glyph (character) images. We then display the glyph at the current (x,y) position and advance x by the width of the glyph, so that the next glyph is displayed after the current one instead of on top of it. Generally, this is the easy part.
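
To make this concrete, here's a rough sketch of the loop in C. The Glyph record and the drawGlyph() helper are placeholders for illustration, not my actual code:

    typedef struct { float u0, v0, u1, v1, width; } Glyph; // corner UVs plus advance width

    void displayFont(Glyph *font, float x, float y, const char *text) {
        for (const char *c = text; *c != '\0'; c++) {
            Glyph glyph = font[(unsigned char)*c]; // the character code indexes the glyph array
            drawGlyph(glyph, x, y);                // render the glyph image at (x, y)
            x += glyph.width;                      // advance x by the glyph's width
        }
    }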

The hard part is actually constructing the glyph image array. Since we convert each text string character into a code to use as an index into this array, we must ensure that these glyph images are ordered according to the codes we're using. ASCII codes are the typical choice here. To achieve this, we need to construct our glyph images in order and in such a way that we can write a routine to strip each glyph from our font image. The easiest way to do this is to construct the glyphs in a grid, one glyph per cell. The most important part here is that every grid cell should have the same dimensions. This uniformity is what allows our routine to quickly strip each glyph from the font image.

Okay, so now that we have a system in place, it's time for the lovely task of creating our font! This is a real time-killer. I don't spend too much time making the first font. In fact, I usually only do numbers and uppercase letters. The first font you build is really a placeholder to ensure that your system actually works correctly. Having done this many times, in many languages, I opted to use the power of the internet to see if I couldn't find some decent, free pixel fonts. Remember, in order to use a pre-made pixel font, the glyphs need to be contained in a grid in ASCII code order, one glyph per cell, with cells of the same size. After wading through that monster list, I chose to start with font 159.


This font is a clean, simple, single-color font with dimensions well suited for display on the iPhone. Not too big and not too small.

With our font selected, now comes the challenge of loading the font image and accessing the pixel data so we can strip out the individual glyphs. I must say, this part of the process is a hellish nightmare. Here's what needs to happen:
  1. load font image
  2. access image pixel data
  3. properly enable OpenGL to handle texture mapping
  4. pass pixel data to OpenGL as a texture
  5. strip glyphs from texture
In practice, all of these steps are specific to both the iPhone platform and OpenGL.

To load our image, we use UIImage's imageNamed: method and keep a CGImageRef to the resulting CGImage:
CGImageRef textureImage = [UIImage imageNamed:filename].CGImage;

Next, we need to allocate a block of memory to hold the image pixel data. This is the data we will pass to OpenGL for our texture. We use C's malloc() to allocate this block of bytes. The number of bytes we need to allocate is equal to textureWidth * textureHeight * bytesPerPixel. If you're working with 32-bit RGBA textures, you've got 1 byte per color component, or 4 bytes per pixel.
GLubyte *textureData = (GLubyte *)malloc(textureWidth * textureHeight * bytesPerPixel);

With our memory allocated, we need to transfer our image into this memory block. This gives us direct access to the pixel data because we have direct access to the memory block that will contain it. In order to transfer our image, we need to create a Core Graphics context. I think of a context as a lens through which your source image passes and is translated into the correct digital information. We use CGBitmapContextCreate() to create our CGContextRef. We then transfer the image using CGContextDrawImage() with our CGContextRef. With our image transferred, we have no more need for the context, so we can release it via CGContextRelease().
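
Roughly, that sequence looks like this (the color space and alpha setting here are one reasonable choice for 32-bit RGBA data, not necessarily the only one):

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(textureData, textureWidth, textureHeight,
        8, textureWidth * 4, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    // draw the source image into our memory block, giving us its raw pixels
    CGContextDrawImage(context, CGRectMake(0, 0, textureWidth, textureHeight), textureImage);
    CGContextRelease(context);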

Now that we have direct access to our image pixel data, we need to transfer it to OpenGL. Once transferred, OpenGL keeps its copy of the image as a texture, which it can render with incredible speed through hardware acceleration. It must be noted that OpenGL ES only accepts textures with dimensions that are powers of 2 (1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024). The dimensions don't need to be the same, but they MUST be powers of 2. Now, before we pass the texture, we need to create a name with which we can reference the texture later. In essence, we tell OpenGL that we want to draw this named texture, and with the name it knows which texture to draw. This name is really a unique number that we ask OpenGL to give us using glGenTextures(). After generating a texture name, we use glBindTexture() to tell OpenGL specifically which texture we'd like to work with. Finally, we use glTexImage2D() to pass our image pixel data to that bound texture.
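
In code, that exchange boils down to something like this:

    GLuint textureName;
    glGenTextures(1, &textureName);            // ask OpenGL for a unique texture name
    glBindTexture(GL_TEXTURE_2D, textureName); // make it the active texture
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData); // upload our pixel data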

After passing the texture to OpenGL, we no longer need our copy of the image, so we use C's free() function to release the memory block containing it.
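
In code, that's simply:
free(textureData);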

Once OpenGL has our texture, we can specify which region of the texture we want to display by passing the coordinates of that region using glTexCoordPointer(). Be aware that texture space extends from 0.0 to 1.0 along the UV axes. This means to translate from our original image pixel space to OpenGL texture space, we need to divide by the dimensions of our original image.
texture.u = pixel.x / image.width
texture.v = pixel.y / image.height

These UV coordinates are the coordinates that we need to pass to OpenGL, specifying the region of the texture we wish to display. 

Finally, this is how we strip the glyphs from the texture! We step through each cell of the grid in the texture and calculate the UV coordinates of the corners of each cell. We then store these coordinates into the glyph array. Since the glyphs are ordered in the texture itself, we are making ordered entries into the glyph array.
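
Here's a sketch of that stripping loop, assuming the texture holds glyphRows rows of glyphCols cells, each cellWidth x cellHeight pixels, and using the same hypothetical Glyph record from the sketch earlier (these names are mine for illustration):

    for (int row = 0; row < glyphRows; row++) {
        for (int col = 0; col < glyphCols; col++) {
            Glyph *g = &glyphs[row * glyphCols + col];                // entries land in ASCII order
            g->u0 = (col * cellWidth) / (float)textureWidth;          // left edge
            g->v0 = (row * cellHeight) / (float)textureHeight;        // top edge
            g->u1 = ((col + 1) * cellWidth) / (float)textureWidth;    // right edge
            g->v1 = ((row + 1) * cellHeight) / (float)textureHeight;  // bottom edge
            g->width = (float)cellWidth;                              // advance used by displayFont()
        }
    }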

Alright, final step here. To actually display our texture regions OpenGL expects a few things. First, we need to enable texturing with glEnable(GL_TEXTURE_2D). Then we need to specify which texture we want to work with using glBindTexture(). Next, we need to tell OpenGL how we want it to scale our textures:
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

GL_LINEAR makes scaled textures blurry; we can use GL_NEAREST if we want them blocky instead. Next, we enable the texture coordinate array using glEnableClientState(GL_TEXTURE_COORD_ARRAY). This lets OpenGL know that we're going to pass it a set of UV coordinates. Then we tell OpenGL where the UV coordinates are stored using glTexCoordPointer(). Finally, we have OpenGL draw everything using glDrawArrays(). From here we just need to disable the texture coordinate array with glDisableClientState(GL_TEXTURE_COORD_ARRAY) and then disable texture mapping with glDisable(GL_TEXTURE_2D).
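
Strung together, a single glyph draw might look like this (vertices and texCoords are hypothetical four-corner arrays; note that the quad's screen positions also need a vertex array, which I glossed over above):

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, textureName);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, vertices);    // screen-space corners of the quad
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords); // UV corners of the glyph's cell
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisable(GL_TEXTURE_2D);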

Alright! So, that covers the fundamentals of loading textures and using OpenGL to render a pixel font, and as you can see, it takes quite a bit of effort. It's taken me about a week and a lot of frustrating days to get this up and running correctly on the iPhone, but I'm loving the results.

Saturday, February 27, 2010

Hardware Woes

In order to test your apps on the iPhone, Apple makes you jump through a substantial number of hoops. I went through this process back in August when I initially started testing my apps on the iPhone hardware. However, the provisioning profile required to do this is only good for 3 months. Since the expiration of my previous profile, I've been dreading going through this process again, but I felt it was time, so I spent a couple of hours yesterday doing just that. I also took some time to document my efforts in a wiki, so 3 months from now this process won't be nearly as tedious.

I must say, there's just something magical about running your apps on the hardware, especially the more touch-intensive ones. I get very excited seeing my work running on the device. However, it became readily apparent that, graphically, there's a HUGE difference between the results displayed in the iPhone Simulator and those on the iPhone hardware itself.

Tuesday, February 23, 2010

A Better Line

I was a bit hesitant to post updates about all the little things I've been doing, but after chatting with a buddy of mine, it's apparent that all those little things add up. I guess this will give me a steady supply of things to post here, which is how I prefer it anyway.

On the graphics front, I found out that OpenGL actually supports anti-aliased lines as well as lines with thickness. This is quite a godsend, as writing those routines is a bit tedious. Also, though I haven't tested it, I'm sure having the hardware render them is much faster than a software version anyway. I'm also pleased that I can easily do translucency with these lines.
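
For reference, the state setup I'm talking about amounts to something like this (the width value is arbitrary):

    glEnable(GL_LINE_SMOOTH);                          // anti-aliased lines
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
    glEnable(GL_BLEND);                                // needed for the translucency
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glLineWidth(4.0f);                                 // line thickness in pixels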

After some research and more playing around, I also discovered that I can write straight C functions alongside my Objective-C code. Now I don't have to hack my preferred syntax for my drawing primitives using macros like I was doing previously. The star above was generated using such a routine. The results, though, demonstrate a problem with GL's thick lines: as far as I know, there's no way to generate end caps at the endpoints of the lines to make them rounded or beveled. Everything I've found indicates that I'll have to write this functionality myself if I want it.

One major benefit to having anti-aliased lines available in GL is that I can use them to generate anti-aliased circles using a technique similar to the brute-force method I used previously. Not only that, but the actual results are significantly better, displaying few visible artifacts.



Saturday, February 20, 2010

Sub-Pixel Circle Fill Prototype

So, I forgot how tedious these sub-pixel primitive routines are to write. It makes me wish I had done a better job of documenting my techniques for producing them. Anyway, after chatting with one of my fellow coders, I was able to get alpha blending (translucency) up and running. Big props to syn9 for showing me the way on that one.

Alpha blending is an essential step for producing sub-pixel primitives, and with that step realized, I decided to prototype a sub-pixel circle fill routine. These results show promise, but I'm getting some artifacts. At certain radii, some pixels get rendered out of place. Also, you might notice those two white diagonal lines. I have no idea where they're coming from, but I'm getting some sort of screen tearing as a result of them. Hopefully I can isolate and resolve these issues so I can move forward with a finalized routine.

As far as my approach, I went with something quick and dirty just to get approximate results. First I created a pixel drawing routine. It's funny, because GL takes a geometry-based approach to rendering, so you really don't have direct pixel-level access when it comes to custom rendering. So, to simulate pixel drawing, my pixel routine simply produces a GL line from (x,y) to (x+1,y+1). I'm almost certain there's a better and faster way to do this, but it works for now.
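
Here's the gist of that routine (it assumes GL_VERTEX_ARRAY is already enabled and the draw color already set):

    void drawPixel(GLfloat x, GLfloat y) {
        GLfloat v[] = { x, y, x + 1.0f, y + 1.0f }; // a one-pixel diagonal "line"
        glVertexPointer(2, GL_FLOAT, 0, v);
        glDrawArrays(GL_LINES, 0, 2);
    }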

As far as generating the primitive, I'm using a brute-force sampling technique. Any circle can be bounded by a box with width and height equal to twice the circle's radius. After determining the coordinates of this bounding box, we test each pixel in the box. If the pixel lies outside the circle, we don't draw it. If it lies inside the circle, we draw it using the circle color. If it lies near the circle, we draw it using a percentage of the circle color. These "percentage" pixels, if you will, are what produce the visually pleasing quality of a sub-pixel primitive.

As for the pixel test: the circle and the bounding box share the same center. By definition, every point on a circle is the same distance from its center; this distance is the radius. Using the distance formula, we test each pixel's distance from the center point. A distance greater than the radius means the pixel lies outside the circle, while a distance less than the radius means it lies inside. A distance near the radius means the pixel lies close to or on the circle.
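
Putting the bounding box, the distance test, and the near-the-edge blending together, the prototype boils down to something like this (drawPixelAlpha() is a hypothetical helper that draws one pixel at a given opacity, and the one-pixel-wide rim is a simplification):

    #include <math.h>

    void fillCircle(float cx, float cy, float r) {
        for (int y = (int)(cy - r); y <= (int)(cy + r); y++) {     // bounding box rows
            for (int x = (int)(cx - r); x <= (int)(cx + r); x++) { // bounding box columns
                float d = sqrtf((x - cx) * (x - cx) + (y - cy) * (y - cy));
                if (d <= r - 1.0f)
                    drawPixelAlpha(x, y, 1.0f);  // fully inside: full circle color
                else if (d <= r)
                    drawPixelAlpha(x, y, r - d); // near the edge: percentage of the color
            }
        }
    }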

Thursday, February 18, 2010

Fancy Drawing Primitives

In a previous coding life, I wrote a set of drawing primitives to render non-standard shapes like thick rounded lines, rounded boxes, and rings. I even went as far as to incorporate anti-aliasing into them, which made them look really slick. Even the simplest programs looked quite professional when rendered using what I called "sub-pixel" drawing primitives.

It would be nice to bring these routines into my iPhone projects, especially if I have to create my own GUI. The thing is, all my prior graphics programming was based on direct video memory access. Sampling and writing to video memory was rather trivial, though somewhat slow because it was all done in software. Now that I'm working with OpenGL, I no longer have that direct access to video memory, which has required me to change my rendering strategy. Whereas before I was writing pixels directly into memory, now I'm forced to render my primitives using various geometries (triangles, boxes, lines). I still haven't figured out how, or if, I can sample pixels, though, which is essential to creating anti-aliased primitives.

I prototyped a rounded box routine today, which gives me hope for the rest of these cool drawing primitives. I've incorporated this primitive into the slider program. Currently, I'm using OpenGL's GL_LINE_LOOP for basic primitives and GL_TRIANGLE_FAN for filled primitives. The rendering code is a bit clunky, being a prototype, but I like these initial results. However, I'd like to change the rendering to use GL_LINES, rendering the box accurately with line segments as opposed to approximating it with geometry.
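
For the curious, here's a rough sketch of one way to build that outline: straight edges joined by quarter-circle arcs at the corners, closed with GL_LINE_LOOP. The segment count and names are illustrative, my actual prototype may differ, and it assumes GL_VERTEX_ARRAY is already enabled:

    #include <math.h>
    #define CORNER_SEGS 8

    void drawRoundedBoxOutline(float x, float y, float w, float h, float r) {
        GLfloat v[(CORNER_SEGS + 1) * 4 * 2];
        int n = 0;
        // arc centers for the top-right, top-left, bottom-left, bottom-right corners
        float cx[4] = { x + w - r, x + r, x + r, x + w - r };
        float cy[4] = { y + h - r, y + h - r, y + r, y + r };
        for (int c = 0; c < 4; c++) {
            for (int s = 0; s <= CORNER_SEGS; s++) { // sweep each corner's 90-degree arc
                float a = (c * 90.0f + s * (90.0f / CORNER_SEGS)) * (float)M_PI / 180.0f;
                v[n++] = cx[c] + r * cosf(a);
                v[n++] = cy[c] + r * sinf(a);
            }
        }
        glVertexPointer(2, GL_FLOAT, 0, v);
        glDrawArrays(GL_LINE_LOOP, 0, n / 2); // the loop closes the final edge for us
    }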

The last thing I'd like to note is that I'm still not entirely comfortable with Objective-C's method syntax. From what I understand, everything in Objective-C is essentially an object, which means all your routines are really methods of some object. Objective-C uses a bracket syntax to invoke methods:

[object methodNameParameter1:p1 parameter2:p2 parameterN:pN];

To invoke a "local" method you use "self" as the object. So, all my rendering calls look something like:

[self drawPrimitiveX1:x1 y1:y1 x2:x2 y2:y2 color:color];

which just feels clumsy and redundant to me as opposed to something like:

drawPrimitive(x1,y1,x2,y2,color);

I created a workaround, though, using a macro. It accepts the syntax I like and expands it into the necessary Objective-C code:

#define drawPrimitive(px1,py1,px2,py2,pColor) [self drawPrimitiveX1:(px1) y1:(py1) x2:(px2) y2:(py2) color:(pColor)]

We'll see how this plays out as I refine the rendering code and interface. Also, as my programs which rely on these primitives get bigger and more complex, I'm sure other problems will present themselves.

Wednesday, February 17, 2010

More GUI Programming


So, from my understanding, you can't really mix Apple's fancy GUI controls with an OpenGL scene. Also, from my experience, you can't really make a graphically intensive game without using OpenGL. So, it looks like I once again have to build my own GUI routines from the ground up if I want slick interfaces in my projects. For the past few days I've been laying the groundwork for the core GUI widgets (buttons, sliders, grids). I've finished the prototype for the sliders and constructed a simple demo that lets you use them to change the color of a displayed circle. Each slider represents an RGB color component. Since I don't have a font engine up and running yet, I used red, green, and blue buttons to show which component each slider modifies.

Tuesday, February 16, 2010

A Fresh Start

It's time to fire up Xcode once again and take another crack at iPhone development. I've decided to document my progress, and this blog is one way I intend to make that happen. Fortunately, I still have all my previous test projects, so I've got some groundwork already in place. However, I already know that I've still got a long way to go before I have results comparable to my work on other platforms. Nothing is more humbling than coding in a new language and a new environment from scratch.