Friday, August 31, 2007

New Status: Moderator

This is AWESOME!!! Not only is today my birthday, but I was also given an unexpected present.

I have been actively posting and participating in a Perl forum since about May of this year. I have only posted one or two questions myself, but spend 95+% of my time there answering others' questions, as some of them challenge me to research and learn before answering.

About a week and a half ago, I made a bid to the main Moderator of the Perl forum to join him in the duties. He put me in for it the other day, and this morning I was delighted to find that I had been awarded the honor.

This is GREAT!! I have since been cleaning up posts, moving one posting, and just all around having a blast with my newfound moderator status. No, I am in no way abusing it, but I am trying to keep up on all the goings-on so as to keep the forum running as smoothly as possible.

Happy Birthday to ME!!!

Tuesday, August 21, 2007

Opinions, opinions

We have all seen the postings on almost every single forum out there. You know the ones I am speaking of, those postings that read, "What is the best IDE for BLAH language?". If you have read any of the responses to those questions, then you are fully aware that it is a "my opinion is better than your opinion" atmosphere.

You may get a couple of responses in the beginning, where people tell you what IDE they use, the pros, the cons, and why they think it is so wonderful, but then it starts. You get this multitude of fascist dictator types who absolutely insist that "there is no better IDE than (insert editor here) and that all other editors are crap in comparison!". You even have the old-school folks, some of whom can remember creating punch cards, who believe that command-line editors or vi are the best editors.

If you are one of those that is getting ready to ask that time-(de)tested question of "Which IDE is better for ...?", then just DON'T!

Here is what I believe, and no, I am not going to follow the masses, preaching what I think is the best editor. Instead I am going to sum it up with this..... try them all. Download and install a number of editors. Play with them, write code with them, debug with them, get to know them. While you are doing this, take notes on what you like and dislike about each one. Then, when you are done, compare all of your notes. You have to not only look at the notes but also ask yourself, "Will I still like this editor in 6 months? In a year?" The answer may very well be, "I don't know."

I am old-school Unix. I believe that the command line rules and vi is the best day-to-day editor. All of the coding that I have learned has been by hand. I prefer not to learn with a fancy, schmancy do-it-all-for-you editor, as I won't learn anything. I like learning a new language in an editor like vi because I get to debug my code by hand and not rely on a program to tell me what is wrong. This allows me to assess the errors and get my coding (by hand) down to a science. Only after I am more than comfortable do I migrate to a more convenient editor that will save me time.

While vi will always have a place in my heart and my editing world (being the first editor I used on Unix), I must say that I have leaned toward ActiveState's Komodo for my day-to-day coding in Perl (and other misc. languages, including HTML). Yes, some will tell you it is a beast and clunky slow. Personally, it takes a minute to start up, but after that I don't have any issues. I don't have this insatiable need to have my editor at my fingertips within a nanosecond of clicking on the link to launch it. I am patient enough to wait the 30 or so seconds that it takes to launch. I use it because I like its syntax highlighting, code sense (hints, kind of like Micro$oft's Intellisense), and overall comfortable feel.

That my friends, is what I think the key is..... comfort! You have to pick an editor that you like and not listen to the skewed views of the mass critics out there.

In a posting to the Boston Linux User's group, Uri Guttman wrote, "so my main point is that coders need to be smarter about their analysis, architecture and design and less caught up in tools like IDE's and syntax highlighting. you can have the greatest IDE in the world and still come up with crappy code and solutions. whereas a good software design can be written and debugged with any set of tools."

That is one of the best statements I have read on the subject, and it is something I have believed for some time. If you aren't able to write good code and debug it thoughtfully, then no editor in the world is really going to help you!

Happy coding!

Tuesday, August 14, 2007

Things that I learned yesterday

If there is one thing about being a geek that keeps me going every day, it is the saying, "You learn something new every day!" Why? Because it is so true. Being a Perl developer has me writing more code than I can keep up with, and I love it. But the best part is that, through all of the coding I am doing, I seem to learn a minimum of one new thing with each program I produce. Now, "one new thing" may seem pretty low, and don't worry, lately it has been a few new things each time, but learning one new thing every day keeps the mind in good shape.

For instance, I was working on a script last week that took a file and parsed out of it lines containing a string that should not have been there. My script moved said offending lines to another file for "safe keeping", while outputting the good lines to their own file. To make sure that everything worked correctly, I had to balance the new file to ensure that ONLY the offending lines were removed and all other lines ended up in the new file.
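A minimal sketch of that kind of split might look like the following. The file names and the pattern here are placeholders of my own, not the ones from the original script:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Placeholder pattern for the string that should not be there.
my $bad = qr/OFFENDING/;

# Create a small sample input so the sketch is self-contained.
open my $demo, '>', 'input.txt' or die "input.txt: $!";
print {$demo} "good line 1\nOFFENDING junk\ngood line 2\n";
close $demo;

open my $in,      '<', 'input.txt'   or die "input.txt: $!";
open my $clean,   '>', 'clean.txt'   or die "clean.txt: $!";
open my $removed, '>', 'removed.txt' or die "removed.txt: $!";

while ( my $line = <$in> ) {
    if ( $line =~ $bad ) {
        print {$removed} $line;   # quarantine the offending line
    }
    else {
        print {$clean} $line;     # keep the good line
    }
}

# Close the handles so buffered output actually reaches the files.
close $_ for $in, $clean, $removed;
```

Every input line lands in exactly one of the two output files, which is what makes the balance check possible in the first place.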

So, I delved into the File::Util module, which has a line_count() function that takes a file as input and returns the number of lines in it. What I discovered was that the function worked fine with the first file processed (the original file), but on each subsequent file (the offending-lines file and the new output file), the counts were totally off, even to the point of the offending-lines file's count being zero (0).
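For reference, a plain-Perl line counter (my own sketch, not File::Util's actual implementation) does the same job and is a handy sanity check:

```perl
use strict;
use warnings;

# Count the lines in a file by reading it to end-of-file.
sub count_lines {
    my ($path) = @_;
    open my $fh, '<', $path or die "$path: $!";
    my $n = 0;
    $n++ while <$fh>;   # one increment per line read
    close $fh;
    return $n;
}
```

The balance check is then simply: count(original) should equal count(good lines) plus count(offending lines).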

So, I emailed the developer who produced the module to get his advice and see if there was an issue with it. After he ran his typical tests and did not discover anything wrong, he came back and asked me to check a couple of things:

1. That I called close() on each file handle before actually acting upon the file that the handle was referencing. This was definitely an issue: I had the close calls after everything was said and done, so I moved them to close the file handle(s) before doing the line count.

2. He asked me to turn off buffering for I/O. That was new to me, so I asked him to explain further. He said that all I had to do was set the variable "$|" to any true value:

i.e.: $| = 1;

This tells Perl to turn on autoflush for the currently selected output handle (STDOUT by default): instead of holding data in a buffer, anything printed to that handle is written out immediately. That ensures all data actually reaches the file the moment it is printed, rather than sitting in memory until the buffer fills or the handle is closed. One other note: set this variable at the beginning of your script, before any output, so the whole script is affected.
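Since $| only applies to the currently selected handle, unbuffering a specific handle (like the output files in my script) takes one extra step. The IO::Handle module gives every handle an autoflush() method; the file name below is just a demo of my own:

```perl
use strict;
use warnings;
use IO::Handle;   # gives file handles an autoflush() method

$| = 1;   # unbuffers the currently selected handle (STDOUT by default)

open my $log, '>', 'demo.log' or die "demo.log: $!";
$log->autoflush(1);                  # unbuffer this specific handle
# select((select($log), $| = 1)[0]); # the classic idiom for the same thing

print {$log} "flushed immediately\n";   # on disk without waiting for close()
```

With autoflush on, the line is readable from the file even before the handle is closed.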

So, after setting the autoflush variable and closing all file handles before counting, the function worked just fine and the counts came out perfectly.

Many, many thanks to Tommy Butler, the author of the File::Util module on CPAN. Without his help, I would probably still be scratching my head over the issue. Now, though, I have a bit more knowledge and experience to draw upon for my next project.

Wednesday, August 01, 2007

Checking for duplicates

If there is one thing that I love about Perl, it is that there is always something new to learn. In my case, I like it to be a few things every day, but that is just me.

In my last post, I mentioned one-liners and said that I was working with some code that was rather puzzling to figure out. Well, I figured it out, with the help of Learning Perl, 3rd Edition. I have said it many times before and I will say it again: as much as the Camel book is famed as the "Bible of Perl", I tend to keep the Learning Perl book much closer to my keyboard.

The one-liner that I was working on figuring out was as follows:

perl -e '$count=0; while (<>) {if (! ($var{$_}++)) {print $_; $count++;}} warn "\n\nRead $. lines.\nTook union and removed duplicates, yielding $count lines.\n"' ./file1 ./file2.txt > ./combined.txt

This code is supposed to take the two files (file1 and file2.txt) and combine them into one file (combined.txt), removing any duplicate entries along the way. What puzzled me was HOW IS IT DOING IT? Yes, if you are wondering, it does work. Any Perl gurus out there are already nodding their heads, as they probably already know how.

The magic of this code is in the "$var{$_}++". The code reads each file line by line and uses each line as a key in the %var hash. The ++ is a post-increment, so it returns the value the key had before being incremented: the first time a line is seen, that old value is undef (false), so the ! flips the test to true and the line gets printed. When the same line shows up again, the stored count is now 1 or more (true), the negated test is false, and the line is skipped. It's a little confusing at first, I know, but it works exactly as designed. Personally, I think it's a great, short idiom for removing duplicates.
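The same trick can be pulled out of the one-liner into a reusable function. The fruit lines below are just sample data of my own standing in for file1 and file2.txt:

```perl
use strict;
use warnings;

# $seen{$_}++ returns the value *before* the increment, so it is
# false exactly once per distinct line; grep keeps those firsts.
sub union_without_dupes {
    my %seen;
    return grep { !$seen{$_}++ } @_;
}

my @merged = union_without_dupes(
    "apple\n", "banana\n",    # lines from a hypothetical file1
    "banana\n", "cherry\n",   # lines from a hypothetical file2.txt
);
# @merged now holds "apple\n", "banana\n", "cherry\n"
```

Because the hash remembers every line it has seen, the first occurrence always wins and the original order is preserved.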

If you still have questions, I recommend you look at the example on page 153 of Learning Perl, 3rd Edition. Yes, I know they are up to the 4th Edition, but I have my 3rd Edition copy with me at the moment.

Happy Coding!!
Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 License.