Some Nerdy Stuff

March 22, 2009

Dual Purpose Types/Interfaces Where Semantics Conflict

Filed under: Uncategorized — aaronls @ 3:03 am

Sometimes developers will create a typedef/class/struct and use it for more than one conceptual purpose.  In the case of the legacy C code I was porting to C++/CLI, they had a typedef of UU8 to use as shorthand for an unsigned char.  They used this UU8 type anywhere they needed a byte, a char, or a char array as a string.  I then tried to compile this program in VS 2008 as a native C++ app (I took things one step at a time, not yet enabling the CLR at this point).  I got almost a hundred errors regarding UU8* failing to convert to char*.  This is to be expected, because in MSVC char is signed by default.  There are compiler options that change this default, but they were not present in the VS 6.0 projects, nor did the command line options for VS 2008’s compiler affect this either.  I resorted to changing the typedef to signed char.  The application compiled successfully and ran, but I noticed that some of the graphics in the C-based UI were slightly corrupt, and a couple of other critical features of the application were not working.  I assumed these issues were caused by compiling the C code as C++.  After a significant amount of investigation, I realized that the UU8 type was being used for some low-level calculations.  Because it was now signed, the calculations were failing.

What I really needed was to split the usage of the type into two categories.  I created a new typedef for signed char, and reverted the old UU8 typedef to unsigned char.  I then had to go through the code and systematically replace any usage of UU8 with the new signed char type wherever it was being used as a char or string.  This meant digging through code I was not familiar with, and analyzing the semantics of the usage of the UU8 type everywhere that it appeared, before deciding whether it should actually be using signed char.  In some cases I got carried away and made the replacement where I shouldn’t have.  Some places had a variable or parameter named something like ClassDesc or ItemDesc, and the Desc suffix led me to believe it was a string, when actually it was data intended to be used as UU8.

This is the problem of trying to take an existing declaration of a typedef, class, struct, or other type, and reuse it where it was not intended.  It would have been one extra line of code for the developer to declare two different typedefs, instead of sharing one typedef across two totally different concepts.  I don’t knock the original developer though, as it is clear he was leaning towards productivity rather than maintainability/readability, which is perfectly valid given the type of project.

Where this really affects readability is in a case (which occurred quite often in the above project) where a function takes multiple parameters, and some of those parameters are declared as the same type but expect totally different conceptual types of data.

For example something like this is really ambiguous and not self documenting:

FindItemName(UU8 * SearchName, UU8 * SearchOption, UU8 * ToBeSearched);

SearchOption is a pointer for passing flags that affect the behavior of the search function.  It is essentially a piece of binary data that will more than likely be operated on with lots of bitwise operations.  SearchName and ToBeSearched are char arrays to be operated on as strings.  This makes calling the function very confusing, and when you run into cases where the names of the parameters imply the opposite of their use (such as the “Desc” suffix example I gave above), it becomes doubly confusing, because each parameter now has two plausible meanings: the one its type suggests and the one its name suggests.

This is not a problem isolated to C or C++.  C# can experience this problem as well, and developers are even tempted to create it.  Interoperability using classes and interfaces is strongly encouraged.  Some developers go as far as to create an interface for every class they write.  Sometimes they try to force classes into having common interfaces for the sake of reusing code, even when they don’t yet have, or anticipate, any concrete implementations that would take advantage of the commonality.  You run into cases where very different classes share an interface, but the behavior of the interface is so vastly different between each class’s implementation that you’d never want to use them interchangeably.

One must not be tempted to force a class into an interface where it does not conceptually or semantically fit, just as one should not try to fit the square peg into the round hole.  The .NET Framework Design Guidelines book provides some tips along the lines of having at least one class implement each interface, to exercise that interface and make sure it essentially makes sense.  I would go further and say that additional classes implementing that interface should pass a usage test.  Consider existing code that interacts with the first class via its interface.  Now consider what would happen if the new class that implements this interface were used in that existing code.  It will, of course, compile, because the code accesses the class via the common interface, but the real question is whether its usage makes conceptual sense and does not result in logic errors.

If we look at an interface like IDisposable, what we see is that it is not just defined syntactically, but also conceptually.  Classes implementing methods of IDisposable, such as Dispose, are expected to implement them with a certain behavior.  Should a class implement the methods only so far as to allow successful compilation by meeting the syntactic requirements, but fail to meet the behavioral/semantic requirements, users of the class will be very surprised by the problems they encounter.  If the class has unmanaged resources, then an improper implementation of the Dispose method will result in resource/memory management issues for users of that class.  Maybe, as the developer of a reusable class, I decided to have a Free() method that releases a file handle, and did not implement this functionality in the Dispose method.  This would be stupid, as users might instantiate my class with the using keyword, and be surprised that the file handle is not deterministically released.  I failed to follow the behavioral definition of implementing IDisposable, and thus my use of the interface does more harm than good by luring my users into the false belief that calling Dispose deterministically releases unmanaged resources.

March 19, 2009

CLR Enabled

Filed under: Uncategorized — aaronls @ 2:16 am

After porting my current hobby project over to VS 2008, I was having a problem with the OpenProcess function failing with “Access is denied”.   As it turns out, the preprocessor defined symbol PROCESS_ALL_ACCESS changed its definition at some point.  Previously in VS 6.0, it was defined as:

STANDARD_RIGHTS_REQUIRED | SYNCHRONIZE | 0xFFF

In VS 2008  it is defined as:

STANDARD_RIGHTS_REQUIRED | SYNCHRONIZE | 0xFFFF

I’m not sure of the significance of the extra F, but I replaced PROCESS_ALL_ACCESS with (STANDARD_RIGHTS_REQUIRED | SYNCHRONIZE | 0xFFF) to sidestep the extra hex digit, and the OpenProcess call succeeded.

Then I enabled CLR support in the project settings and everything continued to work.  Wow, I can now start playing with CLR stuff in C++.  This is new ground for me.  Despite my experience with .NET in C#, and my undergrad C++ experience, I have this feeling it is going to be rough.  I can’t imagine all the wacky things I’m going to have to do to get C++ to play nicely with the managed memory system of the CLR.

March 12, 2009

Build Succeeded!

Filed under: Uncategorized — aaronls @ 4:39 am

In regards to my previous post, I dusted off an old open source C project I had downloaded and never got to successfully compile and run.  I spent just a few hours tonight with it and started fresh.   I got it to compile in VS 6.0, and then also got it to compile in VS 2008 as a native Win32 application.  I even ran it and got further than ever before.  The program is so old that there are some things broken though, and it’ll need some debugging. However,  I’m excited to have gotten over a big hump.  We’ll see what obstacles lie further down the road.

After debugging is complete, I’ll try to port it to managed C++, or whatever the hell they are calling it these days.  That way I can hopefully develop the Windows Forms with the more productive .NET APIs.  The C library the project uses to manage the GUI is archaic as hell.  It looks as though the previous developer probably spent a lot of time writing the GUI.  Although, I must admit, it is pretty.

I might even code all the .NET stuff in a C# DLL that the managed C++ project will reference.

I am having an issue with a couple of DLLs the program is dependent on.  I pretty much have to manually copy them to the output directory.  In the C++ project, when I right click a file, like my dependent DLL, there is no “Copy to Output” option as there normally is in other types of VS projects :*(

March 9, 2009

Personal Projects

Filed under: Uncategorized — aaronls @ 3:13 am

I’ve always tried to do some programming on personal projects, but I never get very far before something pulls me away.  When it comes to something open source, I always find myself working with C++, which I am not as good with as I am C#, and getting over the hump of compiling it for the first time seems to be the thing I have the hardest time with.  It is often the least documented step, or even if it is documented, they’ve drastically changed the way they do things since it was documented.

Where I was previously a lead programmer, I always stressed the importance of making it easy to pull down a project and compile it.  All the dependencies should be easy to pull down and resolve.  A lot of developer time can be wasted in a shop where people hop from project to project and often find themselves on a computer where they need to pull down the newest version of a project.

Since I’m trying to find a less demanding job or begin a degree, maybe now I will actually have time to accomplish something.

The complexity of “getting back into” something after some down time is the same reason I never finish complex games.  If it’s one of those games that uses almost every key on your keyboard, then I can never remember them all, and even though you can normally look in the configuration, it’s still not very fun trying to get your bearings.  I tend to go without playing some computer games for weeks at a time before I get back into them.  If it’s a game like BF2 that is a pain to log into and find a good server, then I generally never play it.  I find it really disappointing how a lot of game makers have moved backwards in usability in regards to how easy it is to actually run the game and get into it.  BTW, props to the makers of UT3.  The “quick find” feature works fairly reliably for getting into a server fast.  Shame on those developers who have adopted the Windows Live login.  Want to cause yourself some frustration?  Try this:

1) Buy Fallout 3
2) Create an offline Windows Live account (you just want to play the game, right?  It’s a single player game, so who cares about the internet?  Achievements are just numbers on a screen anyway), then play your game for awhile, saving your progress as you go.
3) Then go out and buy Grand Theft Auto 4, which is another Windows Live game.
4) Again, you just wanna get right into the game, so you don’t want to bother with a Windows Live account, so you use your existing offline account.
5) Play the game for awhile, saving as you go.
6) Decide that you want to play multiplayer GTA4.
7) Now you have to create an online Windows Live account.
8) Upon doing so, you will find that your save games are associated with the other account, and there is apparently no way to transfer them. (Although you can transfer Fallout 3 save games across accounts just by moving files).
9) Additionally, signing in automatically as one account in one game causes you to also use that account in other games.  Thus you run into a juggling act where you sign in with the older offline account to access old saved games, or the newer online account to play online.  When you run a different game, depending on which account you started your progress under, you generally have to sign out, then sign back in with the appropriate account.  Good luck remembering which account is which.
10) Bonus round!  In GTA4, multiplayer is accessed via the in-game virtual cell phone.  So you log in with your new online account so that you can play online, but you don’t have a saved game, so you must hit Start to begin a new game, click through the intro, and wait for it to load, just so you can bring up the cell phone and activate multiplayer.

Whoever decided to put the multiplayer option as an item on the in-game phone, rather than the main menu, should receive one rabid wolverine in the pants.  They probably thought it’d be “cool” or “clever”.  I’m tired of developers who don’t know how to put themselves in the shoes of their users.  In this case the developer probably thinks I should go jump off a cliff, and rationalizes that I “can not comprehend innovative concepts like this”.  They probably think it is so cool that they had the idea to put the multiplayer option on the in-game cell phone.  What they don’t understand is that none of that matters.  This is not some charitable open source product where you can do it however you want and tell people “if you don’t like it, don’t use it”.  People are paying you to provide them a quality product.  Your users and their experience should come above all else, or you will be missing the mark.  If the user’s experience and your cool idea are ever at odds, the user’s experience should win out.

The fact is, many people will tire of single player, as with many other games, but will continue to play in multiplayer for years.  The dynamics, humor, and challenge you get when pitted against other humans are totally different from anything AI has yet provided.  Forcing these extra steps on your multiplayer users is absurd.  They will gradually become the majority of the user base as they finish single player or grow tired of it.  Additionally, the users who want to play multiplayer are more likely to be the ones who purchased the game rather than bootlegged it, since it is generally difficult, if not impossible, to play a cracked game online.  So why punish your multiplayer users with a painful user interface, when out of all your users, they are the ones most likely to have paid for the game?

I perhaps am being too mean here, as I’m sure there was some context to it being a good idea, but when you bring it all together, then just taking yourself mentally through a typical use case should have raised some red flags.

March 7, 2009

Quilts

Filed under: Uncategorized — aaronls @ 9:04 pm

The geometry in the little wordpress icons remind me of the quilts my mom used to make.  I used to have lots of ideas for quilt patterns.  I actually always wanted to learn to make quilts, but when I was young my mom didn’t want to take the time to teach me, and now I have too many things going on to pick up another hobby like that.
