Thursday, May 20, 2010

What Language To Learn First?

When looking at various programming forums I see this question over and over. Responders usually jump in suggesting languages from C to Java to Ruby to Lisp. Well, maybe not Lisp, but you get the point. The problem with answering the question so quickly is that I think the answer is:
  • dependent on the individual
  • learning a SINGLE language should not be the goal
Personally, I learn by seeing the details. I don't have any problem working at an abstract level (or with infamous car analogies), but before I'm completely comfortable at any given level I need to see the details of what is going on behind the scenes. An example of my personal learning style comes from my first compsci class in college, where Turbo Pascal was used to teach object oriented programming. As an aside, the school soon after moved to Java, but at the time teaching object oriented programming so early in a curriculum was fairly progressive. A key part of any compsci curriculum is the teaching of pointers. Pointers are generally considered a complicated topic, so instead of just explaining in detail what they were, the professor instead built these odd analogies. Today I don't remember the analogies exactly, but I do remember them being more confusing than helpful, even after I fully understood pointers. I recall that they had something to do with houses and phone lines. :)

After struggling for days trying to understand the analogies that were being used, I asked one of my professors to explain in detail just what a pointer is. My professor proceeded to draw a diagram on the whiteboard similar to the one in this wiki article, to which I immediately asked, "that's it?!" All of the methods my professors used to shield me from the details in an effort to help me understand the topic were actually getting in the way of how I learned. I needed to see the details, and at that point my understanding was immediate. Because of my need to see the details in order to understand the whole, I would consider myself a person who learns from the bottom up.

For a person who learns from the bottom up and wants to take things apart and see the details of how it works before understanding the whole, a language like C is the perfect language to start with. C has a concise, fairly simple syntax and hides very little of what is happening on the hardware underneath.

Now, contrast this with a person who learns from the top down. This person needs to see the entire picture before looking at the details, and may never look at the details unless forced to. He may also be perfectly content to stay at higher levels of abstraction, and often doesn't care what is going on behind the scenes as long as the abstractions work. He wants to understand the entire picture before delving into the details, and then only delving as far as absolutely necessary.

For a top down learner, starting with a language like Ruby is a great choice. With Ruby, the top down learner can see immediate results while the language hides a lot of the details of what is going on behind the scenes. The person can learn to create programs early on without being bothered to understand the nuances of what is happening on the computer hardware to make those programs work.

At this point I want to state that I'm not saying one type of learning is better than the other. In my opinion both types of students need to learn the same languages, but it is important to realize that people have different learning styles that should be accommodated. This brings me to my second point: the question should not be about what single language to learn first, but what set of languages to learn, and in what order.

I've made the case above that some people learn from the bottom up and some from the top down. This does not mean that they should end up learning different languages, though; at some point both should be equally proficient in a similar set of languages. A person who learns a single language from the bottom up should apply the same strategy to a set of languages. This means the student should learn C, then a language like Java or C#, and then move on to higher level languages like Ruby and Python (and then on to functional languages, but I consider those beyond the scope of this article). This person ends up applying their single-language learning strategy to the larger problem of learning to program in general.

The person who learns from the top down would simply flip the order of languages: start with Ruby, then move to C#, and finally move to C, making sure to understand how each level relates to the one above or below it.

Whether a student goes from the top down or the bottom up, both should pay special attention to what each abstraction level brings to the table. Where are the leaks in the abstraction? How are they solved? What problems are best solved at each level of abstraction? What are the shortcomings of each language? The advantages? How are the languages the same and different? By researching and answering questions like these the student will gain a much deeper understanding of all the languages they learn. Each time the student learns a new language they will see new ideas and concepts in languages they thought they already knew.

I hope the next time someone asks what programming language to learn first I can point them to this article because the answer isn't as simple as C/C++/Ruby/Java/Perl/Python/....

Saturday, May 8, 2010

Objective-C Protocols and Delegates

My earlier article about building a currency formatter for a UITextField generated a few comments and some confusion about how to use delegates in iPhone programming. In this article/tutorial I hope to clear up any confusion and tie it back to using the currency formatter with the UITextField.

What Is A Protocol?

The Apple documentation on protocols can be found here. The documentation summarizes that:
Protocols declare methods that can be implemented by any class. Protocols are useful in at least three situations:
  • To declare methods that others are expected to implement
  • To declare the interface to an object while concealing its class
  • To capture similarities among classes that are not hierarchically related
The situation we are interested in is the first one, which allows methods to be declared that others are expected to implement. In this situation protocols are very similar to Java and C# interfaces. In our particular case, Apple's UIKit framework provides a protocol for use with the UITextField. This protocol is conveniently named UITextFieldDelegate. Defined in this protocol are 7 optional methods that a program can implement to change the behavior of a UITextField. Keep in mind that Apple has only provided the method declarations and not any actual implementation; the implementation is left up to the programmer for their specific needs. In our case we only need to implement 2 of the methods.

So now we have all we need to create a stubbed out version of our UITextFieldDelegate protocol implementation:
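A minimal sketch of such a stub might look like the following. The class name CurrencyFormatDelegate comes from later in the post; the particular pair of methods shown here (textField:shouldChangeCharactersInRange:replacementString: and textFieldShouldClear:) is my assumption of the two the formatter needs.

```objectivec
// CurrencyFormatDelegate.h
#import <UIKit/UIKit.h>

@interface CurrencyFormatDelegate : NSObject <UITextFieldDelegate> {
}
@end

// CurrencyFormatDelegate.m
#import "CurrencyFormatDelegate.h"

@implementation CurrencyFormatDelegate

// Called before each edit is applied; return YES to accept the change as typed.
- (BOOL)textField:(UITextField *)textField
    shouldChangeCharactersInRange:(NSRange)range
                replacementString:(NSString *)string {
    return YES; // stub: no formatting yet
}

// Called when the clear button is tapped; return YES to allow clearing.
- (BOOL)textFieldShouldClear:(UITextField *)textField {
    return YES; // stub
}

@end
```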

The @interface line is where we tell the compiler that we are implementing the UITextFieldDelegate with this class. Keep in mind that it doesn't have to be a separate class like it is here; it could instead be the controller or any other class where you think the delegate methods should be implemented. I like using a separate class since it makes the delegate easy to reuse and keeps the code modularized.

Since we have not yet provided any implementation, if we took the above code as is and made it the delegate of a UITextField, the text field would work the same way it does without a delegate attached. Since this post is not about the implementation of the currency format delegate, I'm going to point readers to my previous post on the implementation and move on to how to use the delegate.

What Is A Delegate?

Once again the Apple documentation is quite good at explaining delegation. In delegation programming, one object (in our case the UITextField) relies on another object (the CurrencyFormatDelegate) to provide a certain set of functionality. As the Apple documentation explains, delegation allows the programmer to customize the behavior of framework objects without having to subclass. In turn this allows the programmer to aggregate multiple delegates into one object and extend the functionality of the existing framework. Now, what does all this mean for the currency formatter? In order to get our delegate to execute at the proper time we have to attach it to our UITextField. Assume that we have already defined and linked a UITextField in Interface Builder and called it currencyTextField. In the controller's viewDidLoad we just need to assign the CurrencyFormatDelegate to our UITextField's delegate property.
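A sketch of that wiring, assuming the delegate instance is stored in an instance variable named currencyDelegate (this is pre-ARC, manual retain/release code, as was standard at the time):

```objectivec
// Assumed declarations in the controller's header:
//   CurrencyFormatDelegate *currencyDelegate;
//   IBOutlet UITextField *currencyTextField;

- (void)viewDidLoad {
    [super viewDidLoad];
    // Create the delegate and hand it to the text field.
    currencyDelegate = [[CurrencyFormatDelegate alloc] init];
    currencyTextField.delegate = currencyDelegate;
}

- (void)dealloc {
    // The text field does not retain its delegate, so the
    // controller owns it and releases it here.
    [currencyDelegate release];
    [super dealloc];
}
```

Note that a UITextField does not retain its delegate, which is why the controller keeps a reference alive for the lifetime of the view.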

You'll notice a bit of object management code to make sure that we properly allocate and release the delegate when we are done with it. The key line in all of the controller code is:
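Assuming the delegate instance is stored in a variable named currencyDelegate, that line is the assignment to the text field's delegate property:

```objectivec
currencyTextField.delegate = currencyDelegate;
```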

It is here that we tell the text field to use our delegate and call our implementation of the protocol methods.

I hope that this clears up some of the issues people were having while trying to use my text field currency formatter. The code above should mostly work as is since I took it from a working program and pared it down for the post. If there are any copy and paste issues they shouldn't be too hard to correct.

Further Reading

The delegate pattern is a fundamental object oriented design pattern. If you want to know more about the delegate pattern and design patterns in general, check out Design Patterns.

Thursday, May 6, 2010

Are Social Obligation Games Psychological Malware?

Years ago I played a game called Dark Age of Camelot (DAoC). It was my first foray into the MMORPG genre of video games. While the game had its fun moments where strategy and some thinking were required, many parts of it were very repetitive. Leveling up was one of these boring, repetitive tasks that you had to do in order to progress in the game. Players lovingly called this process grinding. Over time I found myself playing the game more and more even though I dreaded logging in. I had become addicted to the game, much like Everquest players before me. When I stopped playing I tried to figure out why I had played so long.

I used to think that I was addicted to DAoC through operant conditioning. This works by rewarding the player often early on and slowly spacing out the rewards so that they require more and more time to complete. Eventually you can get a player to play for days and days in order to accomplish a single task. My friends and I used to jokingly refer to the process as 'give me the pellet', referring to science experiments where rats are rewarded with food pellets for completing some mundane task. Something always bothered me about this analysis, though. I have played games throughout my life and never had a problem with playing too much before. Was this conditioning response on its own enough to keep me playing?

The other main game mechanic that makes MMOs different from other games is that you play in a persistent world with other people. To accomplish anything worthwhile in these games you generally need to befriend and group with other people on a near constant basis. As you progress in the game, the people you group with become more and more important, and you start feeling obligated to log in and help the friends you have made in game. The game plays on an individual's sense of social responsibility to keep people coming back so they don't let down their friends.

In DAoC and other traditional MMOs I don't believe this social aspect is a blatant attempt to keep people playing. There really is an underlying game that can be enjoyable to play, and I think the social aspects are there as another avenue to make the game more enjoyable. In the last few years, though, a new category of game has arisen, and it turns out that its primary draw is the social obligation to play.

Farmville is currently the most popular game in America with between 70 and 80 million people playing through their Facebook accounts. I hesitate to call it a game since it really isn't much of a game at all. An excerpt from this article best describes why people play Farmville:

The secret to Farmville’s popularity is neither gameplay nor aesthetics. Farmville is popular because it entangles users in a web of social obligations. When users log into Facebook, they are reminded that their neighbors have sent them gifts, posted bonuses on their walls, and helped with each others’ farms. In turn, they are obligated to return the courtesies. As the French sociologist Marcel Mauss tells us, gifts are never free: they bind the giver and receiver in a loop of reciprocity. It is rude to refuse a gift, and ruder still to not return the kindness.[11] We play Farmville, then, because we are trying to be good to one another. We play Farmville because we are polite, cultivated people.

Where traditional MMOs addicted geeks with a combination of gameplay, competition, and social obligation, Farmville has dropped all other aspects of gameplay and ropes in its users by entangling them in a complex network of social obligations. Much like a con man plays on our human nature to generally trust others, Farmville is using our basic nature to be nice to others and reciprocate kindness in order to keep people playing.

In my mind a couple of questions then arise with respect to Farmville and software design in general. First, is it ethical to design software in such a way that it provides little value to users other than locking them into a web of social obligations? I'm guessing many users do not even realize what has occurred when they are planning their day around planting, harvesting, and making sure their friends also do their planting and harvesting. At what point does a game like Farmville cease being a game and become a type of psychological malware for its players?

Second, even if people agree that Farmville's use of social obligations may not be ethical, could there be a social benefit in other types of software applications? Software is not written in a vacuum, and without users it is essentially useless. Could adding social obligations to useful software benefit users by getting them to actually use it? Could the introduction of social obligations somehow get users to better maintain their computers by staying up to date with patches and antivirus software?

Ultimately, while I think Zynga is a company preying on people's good nature, that does not automatically mean that keeping users through social obligation is ethically wrong; it must be looked at on a case-by-case basis.