Wednesday, May 4, 2011

And That's a Wrap

  And here we are at the end of the spring 2011 semester, with Communicating Science ending along with it. As such, this will be my last post for the blog project and for ENIAC Beyond. I may start it back up again once I get settled into the computer science industry, but that is probably far in the future. Maybe then I'll be able to post about technologies and systems I'm working on instead of finding articles and stories about great things people have already done.
  Writing has never really been a strong suit of mine. It's always been a struggle, staring at a computer screen for hours to put down a few well-placed sentences. I still remember writing my first two blog posts: a simple welcome post and an overly technical piece about the progress of processing power over the decades. I stayed up until 4 in the morning to polish them off, which was not a great feeling after looking back and seeing how entrenched in the deficit model the processing post turned out to be.
  I caught a lucky break with Watson's appearance on Jeopardy! The event was easily observable and entertaining, able to catch attention quickly. Those posts were where I slowly started to get into the rhythm of writing around engaging stories and events. Subsequent posts tried to draw on interactive sources, or at least tried to be more than a wall of text that would bore the reader. The writing process never really got any easier; each time I sat down to write a blog post it was a long, drawn-out process, but I was feeling better about my posts as time went on.
  Thanks for reading my blog. It's definitely been a learning experience and a good introduction into the difficulty and importance of effective science communication. Hopefully I can take what I learned here and apply it to all that I do in the future.
  For now, I'm signing off from the blogging world. You stay classy, San Diego.

Monday, May 2, 2011

Ludum Dare: Game Creation in a Weekend

  Just this past weekend, a community known as Ludum Dare held its twentieth game creation competition, all in the span of 48 hours. Over the weekend, each game was created by a single programmer who built every asset (graphics, sounds, and player interactions) from scratch around a central theme and pulled it all together for the competition. There are no prizes for winning, but it gives people a great reason to come together and just create some small, quirky, hopefully fun games. Over 350 games were created over the weekend. If you have the time during these last few weeks of school, fire one or two up and give them a shot (some may require special compilation to work, but others run through Flash in your web browser). You may be pleasantly surprised.

More Algorithm Visualizations

  A little over a week ago I wrote a post about a college in Europe that used Hungarian folk dance to explain a few basic sorting algorithms. Today, I ran into another site that does an excellent job showing exactly how different algorithms work, maybe not with as much toe tapping as the folk dance but effective nonetheless. Associate Professor David Galles from the University of San Francisco has filled his site with JavaScript animations of various algorithms, each very important for a solid base in computer science.
  First are data structures such as queues and stacks, where you can push values into the structure and pop them off. Stacks are extremely important for programming, as they are used to keep track of function calls (when a new function is called, it is placed on top of the calling function; the called function performs its tasks and then is popped off the stack, giving control back to the calling function). Queues can be used as a method of resource allocation, giving out resources on a first-come, first-served basis (a method far too basic for real resource allocation, but that's not the point).
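  The push-and-pop behavior described above is easy to play with yourself. Here's a minimal Python sketch (my own illustration, not code from Galles's site) showing a stack tracking function calls and a queue serving jobs in arrival order:

```python
from collections import deque

# A stack: last in, first out. Push with append(), pop from the end.
stack = []
stack.append("main")      # main() starts running
stack.append("helper")    # main() calls helper(); helper() goes on top
top = stack.pop()         # helper() returns; control goes back to main()

# A queue: first in, first out. Enqueue at the right, dequeue from the left.
queue = deque()
queue.append("job A")     # job A arrives first
queue.append("job B")     # job B arrives second
first = queue.popleft()   # job A is served first
```

After running this, `top` is "helper" and `first` is "job A", matching the call-stack and first-come, first-served behavior described above.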
  Indexing is a complex issue dealing with storing and retrieving information. Given a piece of information, what is the best way to store it quickly while still being able to retrieve it just as quickly? Each method uses a different starting point and works from there: a binary tree, for example, treats the first node as the root and uses comparisons to place additional data, passing it down to the left when it is smaller than the current node or to the right when it is larger. Hash mapping instead uses the value of the data to place it in different bins, where hopefully it is quicker to step through one bin than the entire set of data.
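  The comparison-based placement described above can be sketched in a few lines of Python (again, my own illustration of the idea, not the site's code). Smaller values go down the left, larger down the right, and walking the tree left-to-right recovers the values in sorted order:

```python
class Node:
    """One node of a binary search tree."""
    def __init__(self, value):
        self.value = value
        self.left = None    # subtree holding smaller values
        self.right = None   # subtree holding larger (or equal) values

def insert(root, value):
    """Place value by comparing downward from the root."""
    if root is None:
        return Node(value)              # empty spot found; new node goes here
    if value < root.value:
        root.left = insert(root.left, value)    # smaller: pass it down left
    else:
        root.right = insert(root.right, value)  # larger or equal: pass it right
    return root

def in_order(root):
    """Left subtree, then node, then right subtree yields sorted order."""
    if root is None:
        return []
    return in_order(root.left) + [root.value] + in_order(root.right)

root = None
for v in [8, 3, 10, 1, 6]:
    root = insert(root, v)

print(in_order(root))  # [1, 3, 6, 8, 10]
```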
  The sorting algorithms are mostly what I've shown before, with a few new ones added. Radix and bucket sort are pretty interesting fare, using digit position as well as value to sort. A problem that arises with these new sorting algorithms is that they require additional resources to hold partially sorted data, increasing overhead costs. A heap is a data structure that keeps the largest value at the root node, with each child smaller than its parent, and it can be used for sorting via heapsort.
  The graph algorithms are used when given a graph of nodes with some sort of cost associated with moving between nodes. A depth first search will find a path between a start and an end node by moving forward until it reaches a dead end, then backtracking to the previous node until it finds a different route. A breadth first search sends out paths in every direction, exploring each level of depth simultaneously. Dijkstra's algorithm finds the shortest or least costly path between two nodes, while Prim's algorithm finds a minimum spanning tree: the cheapest set of edges that connects every node in the graph. I found another site that gives a good visualization of how Dijkstra's algorithm works.
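  Dijkstra's algorithm is compact enough to sketch here too (my own illustration, using a priority queue, which is the usual implementation). The graph maps each node to a list of (neighbor, cost) pairs, and the queue always expands the cheapest reachable node next:

```python
import heapq

def dijkstra(graph, start):
    """Return the least costly distance from start to every reachable node."""
    dist = {start: 0}
    pq = [(0, start)]                   # priority queue of (distance, node)
    while pq:
        d, node = heapq.heappop(pq)     # cheapest unexpanded node so far
        if d > dist.get(node, float("inf")):
            continue                    # stale entry; a shorter path was found
        for neighbor, cost in graph.get(node, []):
            new_dist = d + cost
            if new_dist < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_dist          # found a cheaper route
                heapq.heappush(pq, (new_dist, neighbor))
    return dist

g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
print(dijkstra(g, "A"))  # {'A': 0, 'B': 1, 'C': 3}
```

Note how the direct A-to-C edge (cost 4) loses to the A-B-C route (cost 3), which is exactly the kind of decision the visualizations animate step by step.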
  These visualizations can give an excellent view into how these algorithms actually function. Playing around with them can help clear up any uncertainties about how they work, so give them a shot.