Wednesday, May 4, 2011

And here we are at the end of the spring 2011 semester, with Communicating Science ending alongside it. This will be my last post for the blog project and for ENIAC Beyond. I may start the blog back up once I've settled into the computer science industry, but that is probably far in the future. Maybe then I'll be able to post about technologies and systems I'm working on, instead of finding articles and stories about great things people have already done.
Writing has never really been a strong suit of mine. It has always been a struggle, staring at a computer screen for hours to put down a few well-placed sentences. I still remember writing my first two blog posts: a simple welcome post and an overly technical piece about the progress of processing power over the decades. I stayed up until 4 in the morning to polish them off, and looking back to see how entrenched in the deficit model the processing post turned out to be didn't make it feel any better.
I caught a lucky break with Watson's appearance on Jeopardy! The event was easy to follow and entertaining, able to catch attention quickly. Those posts were where I slowly started to get into the rhythm of writing around engaging stories and events. Subsequent posts tried to draw from interactive sources, or at least to be something more than a wall of text boring the reader. The writing process never really got any easier; each post was still a long, drawn-out affair, but I was feeling better about my posts as time went on.
Thanks for reading my blog. It's definitely been a learning experience and a good introduction to the difficulty and importance of effective science communication. Hopefully I can take what I learned here and apply it to everything I do in the future.
For now, I'm signing off from the blogging world. You stay classy, San Diego.
Monday, May 2, 2011
Ludum Dare: Game Creation in a Weekend
Just this past weekend, a community known as Ludum Dare held its twentieth game creation competition, run entirely in the span of 48 hours. Each game was created by a single programmer who built every asset from scratch around a central theme, graphics, sounds, and player interactions alike, and pulled it all together before the deadline. There are no prizes for winning, but it gives people a great reason to come together and create some small, quirky, hopefully fun games. Over 350 games were created over the weekend. If you have the time during these last few weeks of school, fire one or two up and give them a shot (some may require special compilation to work, but others run through Flash in your web browser). You may be pleasantly surprised.
More Algorithm Visualizations
A little over a week ago I wrote a post about a university in Europe that used Hungarian folk dance to explain a few basic sorting algorithms. Today, I ran into another site that does an excellent job showing exactly how different algorithms work, maybe not with as much toe-tapping as the folk dance, but effective nonetheless. Associate Professor David Galles from the University of San Francisco has filled his site with JavaScript animations of various algorithms, each very important for a solid base in computer science.
First are data structures such as queues and stacks, where you can push values into the structure and pop them off. Stacks are extremely important for programming because they are used to keep track of function calls: when a new function is called, it is placed on top of the calling function, performs its tasks, and is then popped off the stack, handing control back to the caller. Queues can be used as a method of resource allocation, handing out resources in a first-come, first-served manner (an approach far too basic for real resource allocation, but that's not the point).
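To make that concrete, here's a minimal sketch in Python (the site's animations are JavaScript, but the ideas are identical; the function names in the comments are made up for illustration):

```python
from collections import deque

# A plain list works as a stack: append pushes, pop removes the
# most recent item, just like a call stack tracking function calls.
call_stack = []
call_stack.append("main")        # main starts running
call_stack.append("load_file")   # main calls load_file
call_stack.pop()                 # load_file returns; main resumes
print(call_stack)                # ['main']

# deque gives an efficient queue: first come, first served.
resource_queue = deque()
resource_queue.append("job A")
resource_queue.append("job B")
print(resource_queue.popleft())  # "job A" is served first
```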
Indexing is a complex issue dealing with storing and retrieving information: given a piece of information, what is the best way to store it quickly while being able to retrieve it just as quickly? Each method picks a different starting point and works from there. A binary search tree, for example, uses the first node as the root and places additional data by comparison, passing a value down to the left when it is smaller than the current node or to the right when it is larger. Hash mapping uses the value of the data itself to place it into different bins, so that with luck it's quicker to step through one small bin than the entire set of data.
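Here's that comparison rule as a toy binary search tree in Python (my own sketch, not the site's code):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    # Smaller values go down the left branch, larger (or equal) go right.
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def in_order(node):
    # Visiting left, node, right yields the values in sorted order.
    if node is not None:
        yield from in_order(node.left)
        yield node.value
        yield from in_order(node.right)

root = None
for v in [8, 3, 10, 1, 6]:
    root = insert(root, v)
print(list(in_order(root)))  # [1, 3, 6, 8, 10]
```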
The sorting algorithms are mostly the ones I've shown before, with a few new additions. Radix and bucket sort are pretty interesting fare, using digit position as well as value to sort. One trade-off with these sorts is that they require additional space to hold partially sorted data, increasing overhead. Heaps are a type of data storage that keeps the largest value at the root node, with each child smaller than its parent, and they can be used as a sorting structure via heapsort.
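A quick heapsort sketch using Python's standard heapq module; note that heapq builds a min-heap (smallest value at the root), the mirror image of the max-heap described above, but the sorting idea is the same:

```python
import heapq

def heapsort(values):
    # Heapify the list, then repeatedly pop the root: each pop
    # yields the smallest remaining value, so the output is sorted.
    heap = list(values)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heapsort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```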
The graph algorithms apply when you're given a graph of nodes with some cost associated with moving between them. A depth-first search finds a path between a start and an end node by moving forward until it reaches a dead end, then backtracking to the previous node to try a different route. A breadth-first search sends out paths in every direction, exploring each level of depth before moving on to the next. Dijkstra's and Prim's algorithms are generally used to find the shortest or least costly paths between nodes. I found another site that gives a good visualization of how Dijkstra's algorithm works.
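For the curious, here's a bare-bones Dijkstra sketch in Python; the graph format is something I made up for the example, not anything from the visualization sites:

```python
import heapq

def dijkstra(graph, start):
    """Cheapest cost from start to every reachable node.

    graph: {node: [(neighbor, cost), ...]} -- an illustrative format.
    """
    dist = {start: 0}
    frontier = [(0, start)]          # priority queue of (cost, node)
    while frontier:
        d, node = heapq.heappop(frontier)
        if d > dist.get(node, float("inf")):
            continue                 # stale entry, already found cheaper
        for neighbor, cost in graph[node]:
            new_d = d + cost
            if new_d < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_d
                heapq.heappush(frontier, (new_d, neighbor))
    return dist

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3}
```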
These visualizations give an excellent view into how these algorithms actually function. Playing around with them can help clear up any uncertainty about how they work, so give it a shot.
Thursday, April 28, 2011
Program Like in the Movies
If you've ever wanted to hack like in the movies but never thought your programming skills were up to snuff, or you can't type a thousand words per minute, have some fun with Hacker Typer. Select the file you want to generate and then go nuts on the keyboard. In seconds you'll transform into a Hollywood programmer, typing perfect code impossibly fast. If the generated code seems pretty cryptic, don't worry; even I had to look up what the second and third file choices were. Mobile Substrate is used by third-party groups to patch system functions on the iPhone, and fini.sh appears to be some kind of Linux shell script, but I couldn't find any real information about it.
But the first choice is the one I want to talk about: the Linux kernel. A kernel is the core of the operating system, the bridge between software applications and hardware-level devices. All user-generated resource requests, things like saving to disk or loading a program into memory for quick access, are directed through the kernel. The kernel takes these requests, generates system calls to the appropriate devices, and returns the necessary information back to the application to be passed on to the user.
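You can actually see this bridge from ordinary user code. In this small Python sketch (on a Unix-like system; the filename is made up), each os call is a thin wrapper over a kernel system call:

```python
import os

# os.open / os.write / os.close map almost directly onto the kernel's
# open, write, and close system calls; the kernel checks permissions,
# talks to the disk driver, and reports the result back to us.
fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT, 0o644)
written = os.write(fd, b"hello, kernel\n")   # returns bytes written
os.close(fd)
print(f"kernel reported {written} bytes written")
```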
Direct kernel interaction can be dangerous without the proper knowledge. The kernel has the fewest restrictions on what it can and cannot access in the computer. It acts as a resource manager for the central processing unit, memory, and I/O devices, keeping applications and devices from interfering with each other in ways that could cause catastrophic crashes, all while doling out resources for system requests quickly and efficiently.
Kernel coding is considered one of the most difficult programming practices, as it takes real finesse to use the kernel's raw power without damaging any critical part of the system. So using Hacker Typer as a consequence-free sandbox, just for fun, can be a nice distraction.
Tuesday, April 26, 2011
Computer Greats: Grace Hopper
One of the things I love about today's technology is its ability to recover things that seemed lost to the void of time, never to be seen again. I was wandering around the web and happened to stumble upon something I thought I'd never see: a two-part upload on YouTube of a 60 Minutes interview with Grace Hopper from nearly thirty years ago.
Grace Hopper was one of the greatest pioneers of early computer science and a powerful force in communicating it. She was one of the programmers of the Harvard Mark I in 1944, a machine often marked as the beginning of the modern computer era. In 1949, she assisted in the development of the UNIVAC I, the first commercially produced computer in the United States. While working on the UNIVAC system, Hopper created the very first compiler for electronic computer systems. Compilers are incredibly important for modern programmers, as they allow us to write human-readable code that is then translated into machine code the computer can actually run. Later, Hopper participated in the creation of the widely used language COBOL, based on a simpler language she had created earlier. After her retirement from the Navy, Hopper went on the lecture circuit, educating students, military personnel, and business leaders about computer science.
0:03 Grace laces her talks with a bit of humor and personal history, techniques we've seen praised in Communicating Science.
1:35 Grace describes the current state of the computer science revolution, remarking that it is still in its infancy.
3:49 Grace hands out pieces of wire to her audiences to give them a feel for a nanosecond, a billionth of a second: each wire is cut to the distance light travels in a vacuum in that time, just under a foot. She then pulls out a coil of wire nearly a thousand feet long for comparison, a microsecond.
The second part of the interview turns away from her contributions to the field of computer science and more towards the military, politics, and gender.
Grace Hopper is one of the most important figures in the dawn of the modern computer era. It was a wonderful find to see these videos of such a keen lady uploaded.
Wednesday, April 20, 2011
Sort Your Heart Out
Sorting is a fundamental problem for computer scientists: given a list of unordered objects and a way to compare them, what is the best way to put them into a logical order? There are many different answers out there, ranging in simplicity and efficiency. Sapientia University in Romania took a few of the simplest sorting algorithms and showed how they work in a fairly novel way, through the magic of dance. With a local Central European folk dance team and a folk band, they show off some of the simpler sorting algorithms in a livelier fashion than most demonstrations of sorting.
The first and simplest of the sorts is bubble sort. Bubble sort starts at the beginning of the list and compares the first two objects. It does nothing if those two objects are in the correct order, or swaps them if they are not. The algorithm then compares the second and third elements in the list, swapping when necessary and forcing the higher value up the list, "bubbling" it towards where it needs to go. This continues until the sort reaches the end of the list, at which point it goes back to the beginning and starts over. The process repeats until an entire pass across the list produces no swaps, which means the list is sorted and the algorithm stops.
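In code rather than dance, bubble sort looks something like this little Python sketch (my own, not from the video):

```python
def bubble_sort(items):
    # Keep making full passes until a pass produces no swaps.
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(items) - 1):
            if items[i] > items[i + 1]:
                # Out of order: swap, bubbling the larger value up.
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```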
Insertion sort is another simple algorithm. Starting at the beginning of the list, it "separates" the first element into its own sorted sublist (a single element is trivially sorted). It then takes each following element in turn and places it into the correct location inside the sorted sublist, continuing down the list until a fully sorted list is produced.
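Again as a rough Python sketch of the same idea:

```python
def insertion_sort(items):
    # Everything left of i is already sorted; slide items[i] into place.
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]  # shift larger values right
            j -= 1
        items[j + 1] = current
    return items

print(insertion_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```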
Akin to bubble sort and insertion sort, shell sort compares pairs of elements separated by a gap, initially half the size of the list, and swaps them if needed. When a swap happens, the moved element is compared again with the one a gap earlier, to check whether further swaps are needed. After comparing all elements at this gap, the algorithm iterates over the list again with a smaller gap, continuing until the list is sorted; the final pass, with a gap of one, is exactly an insertion sort.
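A minimal shell sort sketch, halving the gap each round (one common gap sequence among several):

```python
def shell_sort(items):
    gap = len(items) // 2
    while gap > 0:
        # Gapped insertion sort: compare elements `gap` apart.
        for i in range(gap, len(items)):
            current = items[i]
            j = i
            while j >= gap and items[j - gap] > current:
                items[j] = items[j - gap]
                j -= gap
            items[j] = current
        gap //= 2  # the last pass, gap == 1, is plain insertion sort
    return items

print(shell_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```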
Selection sort takes a slightly different approach, only swapping when it knows it has the correct value for that position in the list. The algorithm iterates over the entire list to find the lowest value in the list, then swaps that value with whatever element is in the first position of the list. The algorithm then repeats the process with the next lowest value in the list, placing it in the second position in the list. The process is continued until the list is sorted.
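And finally selection sort, sketched the same way:

```python
def selection_sort(items):
    for i in range(len(items)):
        # Scan the unsorted remainder for the smallest value...
        smallest = i
        for j in range(i + 1, len(items)):
            if items[j] < items[smallest]:
                smallest = j
        # ...then swap it into position i, which is now final.
        items[i], items[smallest] = items[smallest], items[i]
    return items

print(selection_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```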
Monday, April 18, 2011
On a Lighter Note
A few weeks ago I wrote a blog post about a comic I'd run across in Dinosaur Comics which talked about lazy posting. Dinosaur Comics is definitely a good read but doesn't have much connection to computer science, with the exception of a wild one here or there. I thought it would be a good idea to share some other comics I've run across that deal largely with computer science and that I enjoy a lot.
Abstruse Goose is the most recent one I've started reading. It's a comic with no real overarching premise but a pretty strong connection to computer science, art, and mathematics. This comic sums up my college career pretty well.
xkcd by Randall Munroe is a well-known and well-acclaimed webcomic with a heavy focus on math, physics, and computer science, although it does get sidetracked sometimes with rather sappy strips that can feel a bit off track. Munroe also has a lengthy blog of his own with tons of mathematics and science to read to your heart's content.
Saturday Morning Breakfast Cereal by Zach Weiner is my favorite of the comics listed here. Updating daily, Weiner manages to produce consistently funny comics over a wide range of subjects, from philosophy to economics, from politicians to scientists, and injects enough of his own goofy humor that it's always a great read. He even has a comic about communicating science!