Wednesday 2 April 2014

Sorting and Efficiency

     I felt that the past couple of weeks were vastly different from the first eight or so weeks of the course. It seems that we have stepped into a more mathematical and theoretical side of computer science, whereas the earlier weeks were about learning the technical aspects of programming, like developing classes, using recursion, and generally getting our programs to work by any means necessary, which felt more like a continuation of CSC108. I feel that the material explored in this final portion of the course is considerably more valuable than those technical programming skills, because writing efficient code is usually more difficult than writing code that merely works.

    An interesting concept that we learned this week was expressing the run-time complexity (worst case in particular) of some sorting algorithms. In particular, we learned how to describe the worst-case run time of an algorithm by finding a tight bound for it. By tight bound, we mean another function that grows like the worst-case run time of our algorithm, so that we can show how efficient or inefficient a particular algorithm is. When we find a tight bound g(n) for the run time of an algorithm, we say that it is an O(g(n)) algorithm, or in words, that the run time of the algorithm is in Big-Oh of g(n). In intuitive terms, this means that the run time of the algorithm grows like g(n), up to a constant factor. (Strictly speaking, Big-Oh only promises an upper bound; a genuinely tight bound is usually written with Theta, but in this course we use Big-Oh for both.)
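To make the worst-case idea concrete, here is a minimal sketch (my own example, not from lecture) of linear search, where the worst case is easy to see: if the target is not in the list, the loop runs once for every element, so the run time grows like n and the algorithm is O(n).

```python
def linear_search(lst, target):
    """Return the index of target in lst, or -1 if it is absent.

    Worst case: target is not in lst, so the loop runs len(lst)
    times -- the run time grows linearly with n, i.e. O(n).
    """
    for i, item in enumerate(lst):
        if item == target:
            return i
    return -1
```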

    A sorting algorithm whose run time we analyzed is merge sort. This algorithm sorts a list by recursively splitting it in half, sorting each half, and then merging the two sorted halves. It has a complexity of O(nlogn) because a list of length n can be halved about log(n) times, and at each level of splitting, merging the sublists back together takes about n steps. This gives a complexity of O(nlogn) altogether for merge sort in the worst case (and also in the average case). I found merge sort to be a rather unique sorting algorithm because its worst-case, average-case, and best-case run times are all O(nlogn). I believe the reason for this is that the list will still be recursively split in half and then merged back together, which takes about n steps per level, even if the list is already sorted, so the best-case run time is essentially the same as the worst-case run time.
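The description above can be sketched in code. This is a simple version of merge sort (not the exact implementation from lecture): the list is split in half, each half is sorted recursively, and the merge step does the linear amount of work at each level.

```python
def merge_sort(lst):
    """Return a sorted copy of lst using merge sort."""
    if len(lst) <= 1:          # base case: 0 or 1 elements are already sorted
        return lst
    mid = len(lst) // 2
    left = merge_sort(lst[:mid])    # recursively sort each half...
    right = merge_sort(lst[mid:])
    return merge(left, right)       # ...then merge the sorted halves


def merge(left, right):
    """Merge two sorted lists into one sorted list in linear time."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])    # at most one of these has leftovers
    result.extend(right[j:])
    return result
```

Note that the splitting and merging happen regardless of whether the input is already sorted, which is exactly why the best case is still O(nlogn).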

   I found quicksort to be an algorithm that was relatively easy to follow, and so I enjoyed it. We pick a pivot value, then move everything less than the pivot before it and everything greater than the pivot after it, so the pivot ends up in its correct position. We then recursively call quicksort on the sublists to the left and right of the pivot (much easier to understand than merge sort). The fact that quicksort can have a worst-case run time of O(n^2) was kind of intimidating, because it made me think of how real programmers have to account for such worst cases, which can often be very difficult. Normally, the run time of quicksort is O(nlogn), which is very fast, but choosing a good pivot might be tricky. Thankfully, we are computer scientists, so we can think of ways to choose the pivot effectively and try our best to avoid the worst case.
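A minimal sketch of the idea (using the first element as the pivot for simplicity, which is not how a careful implementation would choose it):

```python
def quicksort(lst):
    """Return a sorted copy of lst using quicksort."""
    if len(lst) <= 1:    # base case: nothing to partition
        return lst
    pivot = lst[0]       # naive pivot choice; an already-sorted list
                         # makes this the O(n^2) worst case
    smaller = [x for x in lst[1:] if x < pivot]
    larger = [x for x in lst[1:] if x >= pivot]
    # the pivot lands in its final position between the two parts
    return quicksort(smaller) + [pivot] + quicksort(larger)
```

Picking the pivot at random, or as the median of the first, middle, and last elements, is a common way to make the worst case unlikely in practice.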
   
    I enjoyed this portion of the course significantly more than the first portion because it required more thinking and analysis than just blindly trying to get a program to work. However, I did find it intimidating that in the future I will have to worry about making my programs run efficiently, something that I didn't really care about before. I also felt that we should have spent more time on this topic: it is substantially more difficult because we had done little run-time analysis before, and it was hard to grasp immediately. I find it frustrating that we rushed through one of the more complicated portions of the course, and I also feel that we spent far too much time on recursion when we could have been learning more about analyzing and optimizing our algorithms, which will be very important in the future.

Thursday 20 February 2014

Week 7 - Recursion

    It has been several weeks since I've made a post, and I feel that I have made great improvements in my programming skills and understanding of the content. For example, I did not understand the property function at all, and I also felt that it was useless. However, after I went back to take another look at it, I finally understood its importance: it stops the user from changing an attribute of a class in a way that the programmer would not want it to be changed. We can do this by writing a setter method that is called every time the user tries to assign to an attribute of the class. We would also need a getter method; otherwise the private attribute would be unreadable. I have even used the property function in some of my own code, so I feel that I am comfortable with it. This is just one of several topics I feel I have progressed in; others would be handling exceptions, object-oriented programming, and inheritance. I have made this progress through several hours of practice and by working through and completing Assignment 1. I believe that lots of practice is the key to doing well in this course, at least for me, because I often have trouble grasping concepts at first sight during lecture.
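The getter/setter pattern described above can be sketched like this (the Account class and its no-negative-balance rule are my own made-up example):

```python
class Account:
    """A made-up class using property() to guard an attribute."""

    def __init__(self, balance):
        self._balance = balance    # "private" attribute, by convention

    def _get_balance(self):
        """Getter: called whenever someone reads account.balance."""
        return self._balance

    def _set_balance(self, value):
        """Setter: called whenever someone assigns to account.balance."""
        if value < 0:    # the rule the programmer wants to enforce
            raise ValueError('balance cannot be negative')
        self._balance = value

    balance = property(_get_balance, _set_balance)
```

Now `account.balance = -5` raises an error instead of silently putting the object into a state the programmer never intended.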

    While I have progressed greatly in some aspects of this course, I still find recursion quite challenging. When we first started learning about it, I had absolutely no idea what was going on, because calling a function within its own definition was just mind-boggling to me. Another reason for my struggles with recursion is that it is somewhat difficult to trace. There are so many things happening within it that I get confused while tracing it; I don't mean just tracing on paper, but visualizing and dissecting what happens at each step. Visualizing each step of a function helps me understand where potential problems might lie and how the function actually works, but I have a considerably difficult time doing this with a recursive call. After several weeks of working with recursion in lecture, the assignment, and labs, I have come to understand some parts of a recursive function. For example, I know that there has to be a base case for the function to eventually reach; otherwise the maximum recursion depth will be exceeded. With this information, I know that the first step in writing a recursive function is to write the base case. I start having trouble with the rest of the function after this step. I am now able to write simple recursive functions, such as finding the nested depth of a list or the sum of the elements in a nested list, but I still require too much time to write some of the slightly more complex ones. I believe that I need more practice with writing recursive functions because I have hardly attempted them outside of labs, lecture, and the assignment. I know that writing recursive functions is an essential programming tool, and so I will continue to practice until I am fully comfortable with them.
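The nested-depth function mentioned above is a good example of the base-case-first approach (this is one common way to write it, not necessarily the version from lab):

```python
def nested_depth(obj):
    """Return how deeply lists are nested in obj.

    A non-list has depth 0; a list is one level deeper than its
    deepest element (an empty list has depth 1).
    """
    if not isinstance(obj, list):    # base case, written first
        return 0
    # recursive step: one more than the deepest element
    return 1 + max([nested_depth(x) for x in obj] + [0])
```

Tracing `nested_depth([1, [2, [3]]])` step by step: the inner `[3]` has depth 1, so `[2, [3]]` has depth 2, so the whole thing has depth 3.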

Sunday 26 January 2014

Week 3 SLOG: Object-Oriented Programming

     Last term in CSC108, I believed object-oriented programming to be the most useless, tedious, and annoying part of the course. I did not understand the purpose of classes, as they seemed to be a much more complex and inconvenient way of creating functions. While I acknowledge that I do not yet have a thorough understanding of object-oriented programming, I believe I have made strides in understanding its purpose. An instance of a class has specific data and attributes that are unique to that instance, and these features assist greatly in keeping everything organized. For example, I would imagine that Facebook works well partly because of its use of classes. A profile could be a class that Facebook uses to represent the people who sign up: it would have attributes such as number of friends, age, and date of birth, and methods such as adding another user as a friend. I could not imagine how this would be accomplished without classes, since the code would be much more complex and disorganized. This example is just to show that I have a new perspective on classes and their usefulness. I find it interesting that classes can inherit properties of other classes, although I do not yet understand the purpose of this, since we could just have one class with all the properties of the similar classes. With all the great benefits of classes, I still believe that they are somewhat complex to develop at times. For example, I find having to create __str__ and __eq__ methods very inconvenient and somewhat time-consuming. I am also moderately uncomfortable implementing such methods because I do not understand how they work. For example, I don't quite understand why using the == operator on two instances of a class will, by default, check whether their memory addresses are equal rather than their values. This is one of the several aspects of classes that I have trouble comprehending.
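To make the profile example concrete, here is a minimal sketch (all names are my own invention, not Facebook's actual design). The key point is the last part: by default, == on two instances compares identity (memory addresses), and writing __eq__ is how we tell Python to compare values instead.

```python
class Profile:
    """A made-up profile class, like the example above."""

    def __init__(self, name, age):
        self.name = name
        self.age = age
        self.friends = []

    def add_friend(self, other):
        """Record another Profile as a friend."""
        self.friends.append(other)

    def __eq__(self, other):
        # without this method, == would fall back to comparing
        # memory addresses (identity), not values
        return (isinstance(other, Profile) and
                self.name == other.name and
                self.age == other.age)

    def __str__(self):
        return '{} ({}), {} friends'.format(
            self.name, self.age, len(self.friends))
```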

      I believe I can solve my troubles with object-oriented programming through consistent practice with developing classes, and by analyzing their properties thoroughly from top to bottom so that I understand why they work the way they do. I believe that this approach is working so far, because I now have a thorough understanding of how some of the features of object-oriented programming work. For example, I know that the __init__ method is used to give an instance of a class its specific attributes and data, depending on the class of course. This is an example of a concept that I was struggling with before but am now very comfortable with. I suspect that the reason for my problems with object-oriented programming is that I started working with classes before really understanding why and how they work. With this post, I hope to have conveyed my thoughts on object-oriented programming rather than merely reiterating its several features.