Monday, October 06, 2014

Problem Found With Autoenv

When I first posted about virtualenv a few months ago, I was in the middle of learning to use and configure it.  Since then, I have created a script that creates my basic project structure (including creating bitbucket and/or github projects) and also initializes git for you.  All in all, I wanted to be able to run the script and have it set up everything so that I could just start my coding.

As I was using my script though, it did what I created it to do, but unfortunately I started to notice that after it was run, my environment would "activate" (quotes used on purpose) when I changed to the project directory, but not really.

Early on, I gravitated towards using autoenv to auto-activate my environment when I simply cd'd into the project directory structure.  While this is a really cool tool, it is supposed to source $project_dir/bin/activate to activate the environment.  Unfortunately, when I do a "which python" after changing to a project's directory, the system python path is returned when it should be the python in the project's environment.

In the testing that I have done, this seems to be a bug with autoenv.  Thankfully, it's not the type of bug that is a project killer, just an annoyance.  For the time being, I plan on adding a line to each project's .env file that sources the activate file, as doing so seems to be a valid workaround.  I am also researching whether anyone has reported this to the developer.  If not, I will open an issue with them.
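For reference, the workaround makes the .env file at the project root look something like this (assuming the environment was created in the project directory itself, so the path is relative to where you cd to):

    source ./bin/activate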

One note though: once activated via my workaround above, you will need to deactivate the environment when you are done and exit the project's directory structure.  This can be done by running the 'deactivate' command.

Thursday, October 02, 2014

Passwords In Your Bash History

This is one of those things that is utterly annoying because you can only get mad at yourself.  At work, I have my ssh key populated on a lot of our systems, but there are also plenty of systems where it does not yet reside.  When I really get working on something, moving quicker than usual, there will be times I ssh to a system and don't realize that it "just connected".  The problem is that my fingers are acting way ahead of what I am seeing and have already typed my password and hit enter. 

Well, the problem is that you're going to get an error, because your password is not typically a command.  It's at that point that you will realize that you messed up and that your password is now residing in your bash history. 

Initial searches will show that you can simply edit your history file and remove the offending line(s).  Sure, but I don't want to have to go through all that if I don't have to.  So, digging further you will find that you can simply do this (assuming that you did not type anything else after typing your password and hitting enter):

    history -d $((HISTCMD-1))

This works because HISTCMD holds the number of the command currently being entered (the 'history -d' itself), so subtracting one gives the previous entry, which is your beloved password.  The command promptly removes it from the history, saving you the worry of having it in there.

Now, if you continued typing in commands and figured you would get back to removing it, there is another way you can do this.  First, type 'history' and hit enter.  If you're like me, you have your history set to thousands of lines, so you may want to pipe it through less or more to keep it from scrolling off your screen, like so:

    history | less

The output will look similar to the following (just more lines):

    757  ls
    758  ls -lart
    759  cat .bashrc
    760  exit
    761  clear

The line number is to the left and the command that was entered is on the right.

Find the line number with your password and issue the following:

    history -d <number>

replacing <number> with the offending number from the history output.

Another example: say you want to remove all occurrences of a command from your history (and this could work for your password as well).  Simply run the following:

    history | grep "clear" | awk '{print $1}'

That will output the matching line numbers in your history.  You then feed each one to a 'history -d' to remove it.  Yes, that's tedious, but that is where shell scripting comes in really handy.  Write a script to do it for you (see the sketch after this list).  For instance, the script could:

- take a single word command as input (to search for)
- find all occurrences of that command in the history and save the line numbers (to an array)
- cycle over the array and run each number through a 'history -d', removing it.
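As a minimal sketch of that idea (really a shell function rather than a script, since history is per-shell and a separate process could not edit it), you could drop something like this into your .bashrc; the function name is my own:

    histdel() {
        # Gather the history numbers of every entry matching $1,
        # highest first, so deleting one doesn't renumber the rest.
        local n
        for n in $(history | grep "$1" | awk '{print $1}' | sort -rn); do
            history -d "$n"
        done
    }

Then 'histdel clear' sweeps every matching entry out of your history (most likely including the histdel call itself).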

There are a bunch of possibilities.  Of course, you can also add a bit of configuration, reducing what gets recorded in the first place.  In your .bashrc, you can set the HISTCONTROL variable with options that tell history how to act. 

In my .bashrc you will find the line:

    export HISTCONTROL=ignoredups:ignorespace

What that does is tell history not to record a command that duplicates the previous entry, and to not record commands that start with a space.  That last one is VERY useful when you want to do something but not have any record of it in your history.  You simply type a space before your command and that command will not end up in your history.

Hopefully this helps someone out there keep their passwords secret and not in their history file. 






Friday, September 05, 2014

You Should Really Enable Two Factor Authentication

To your standard user, when you access a website, you create a login and password combination for verification of your identity.  The login name (also referred to as a username) is typically either a unique name or an email address.

At first thought, one would think that creating a login and having a user set their password should be sufficient security, especially considering that a lot of sites have a minimum set of standards for their passwords (i.e., length, types of characters required, frequency of those characters, etc).  Unfortunately, this is incredibly far from the truth.

As we all know, the number of login and password combinations that the average user has created could be anywhere from a dozen or two all the way up into what some consider an astronomical number (depending on the number of sites they have accounts on).  The problem most people run into is that they don't want to have to remember all of those passwords, so they use one common password across all of their sites.  While this is great for them, the problem is that once an intruder has your password from one site, they will save it for use elsewhere as a 'just-in-case'.

You can now see the inherent vulnerability in this.  Sorry for the digression, but I felt it necessary to get across my point: the need for another, added layer of security that is very difficult to circumvent.  Two Factor Authentication.

To quickly sum up the gist of what Two Factor Auth is (referred to after this as TFA): it is an added layer of security on an account, typically by way of login verification.  This verification usually comes in the form of either an SMS (text message) with a code that the application will then prompt you for (and not continue the login unless the correct code is entered), or an application-produced verification code.  The latter is used by things like Facebook, where the application on your phone has an option for a security code.  It will display a code that has been synced with the Facebook servers.

What happens when you login is:
  • You log in to the application on your computer using your login and password
  • You are then prompted for the security code. 
  • You then grab your phone, open the application, click to the security code page in the app, and enter the code into the waiting box in the browser.
If all was entered correctly, you are granted access to the application.  What's nice is that potential intruders may have been able to steal your credentials from the Facebook servers (hypothetically, of course), but they would not have your phone with the verification code.  So, even if they try, they will not be able to gain access to your account.

There are a number of sites today that offer TFA in order to protect their users, and the number grows every day.  There is even a page that lists sites, with links to the page on each site where you can enable TFA.  Lifehacker also put up a page on TFA that gives a nice overview of how some of the bigger names have implemented it.  I just hope that those that do not currently offer it are smart enough to know the benefits and actually implement it before it is too late. 

If you do not yet use TFA, please look into doing so.  While it does add a little bit of action on your part to set it up and use it, that little bit of time is well worth not having your personal and private information stolen and sold.  It is the responsibility of each of us to make sure we protect ourselves, not to assume that someone else is going to do it for us. 


Wednesday, September 03, 2014

Remove Offending Host Key From known_hosts File

If you are managing a whole mess of servers, you may have occasions where the host key associated with a host, or hosts, changes.  This is typically due to re-installation.  Nonetheless, when you attempt to ssh to a host for which you previously had an entry in your ~/.ssh/known_hosts file, you will see a message similar to the following:

  @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
  @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
  @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
  IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
  Someone could be eavesdropping on you right now (man-in-the-middle attack)!
  It is also possible that the RSA host key has just been changed.
  The fingerprint for the RSA key sent by the remote host is
  Please contact your system administrator.
  Add correct host key in /home/user/.ssh/known_hosts to get rid of this message.
  Offending key in /home/user/.ssh/known_hosts:811

  RSA host key for has changed and you have requested strict checking.


Now, if I were accessing something over the internet via ssh and not on my corporate network, I would definitely need to be suspicious of this message.  You don't want to take chances with your security so always be sure.  But, if you are on your own corporate network and get this, check with your System Administrators, but the machine might have been re-installed.

So, looking at the above output, you will see a lot of information.  Most of it doesn't matter.  What does matter, for the sake of removing the key in question, is the line that reads:

  Offending key in /home/user/.ssh/known_hosts:811

That line tells you exactly what line in the known_hosts file contains the entry you want to remove.  Whether you are on Mac, Linux or Unix, the approach is the same (though see the macOS note below).  What you want to do is grab that number after the colon above, and run the following command:

  sed -i "811 d" /home/user/.ssh/known_hosts

The -i tells sed to edit the file in place.  (One caveat: the BSD sed that ships with macOS requires a backup suffix argument with -i, so there you would run: sed -i '' "811 d" /home/user/.ssh/known_hosts.)  Inside the double quotes you have the line number (grabbed from the output) and then a d, which deletes the entry at the line number you provide.  The only other thing on there is the full path to the known_hosts file.  If you're not sure where it is, it was on the line above the offending key line in the above output.

Now, you could easily put this into a quick bash script that takes one field of input (the line number) and then calls the command as shown.  Either way, I hope this helps with this common problem.
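As a rough sketch of that script (the file name and the backup step are my own additions):

  #!/usr/bin/env bash
  # Usage: rmhostkey <line_number>
  known_hosts="$HOME/.ssh/known_hosts"
  cp "$known_hosts" "$known_hosts.old"    # keep a backup, just in case
  sed -i "$1 d" "$known_hosts"            # GNU sed; on macOS: sed -i '' "$1 d" ...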


Update: Thanks to a comment below from Attila-Mihaly Balazs for letting me know that you can also use:

  ssh-keygen -f "/home/user/.ssh/known_hosts" -R offending-hosts-name

That will remove the entry for the offending host whose key has gone stale, with the added benefit of a backup of the known_hosts file, saved with a .old extension, just before the entry's removal.  

If you find you do not need the backup, you can simply delete it.

Monday, August 18, 2014

git Error: Unable to find remote helper for https

At my work, my group is (currently) pretty rooted in svn for all of our repository work.  The rest of the company has migrated to using git, but because we are responsible for all of the company's systems, time to switch is not exactly in abundance.

I do use git myself and like it, but when you use it on your own system, you don't always experience all the issues you could (especially since you probably performed all of the necessary pre-installation steps).

On my work desktop I was recently trying to clone a repository using https and received the following error:

  Cloning into gitflow...
  fatal: Unable to find remote helper for 'https'

I was momentarily puzzled by this, knowing I had seen it before, but I couldn't remember what the issue was.  I remembered that git uses curl for cloning via https, so I quickly checked and found it already installed.  So, I turned to Google.  That quick search enlightened me to the fact that the curl-devel package was not installed.

Explanation:  While it is said that git uses curl for its https cloning, it's actually the curl development libraries that it makes use of.

So, I needed to do the following, since I didn't have them on there when I built git (the full sequence is sketched after the list):
  • Install the curl-devel package
  • Find the source code I downloaded and built git from and do the following:
    • ./configure
    • make
    • sudo make install
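Concretely, the whole sequence looks something like this (the package name assumes a Red Hat-based system, and the source path is hypothetical):

  sudo yum install curl-devel    # the development headers git's https helper needs
  cd ~/src/git                   # wherever you unpacked the git source
  ./configure
  make
  sudo make install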
After doing all of that, git successfully downloads the repository using https.  Do know that the make does take several minutes to run as there is a lot to build.  

Wednesday, July 30, 2014

Python Virtualenv Setup: Updated

I used the script I created to start a new project and realized very quickly that something wasn't right.  I noticed that autoenv wasn't working, while it had previously.  I checked other projects I used it with and it worked fine, so I started digging through the code.

Then I saw it.  With all the other projects, there was either a github or bitbucket repository created.  For this project I opted for neither.  Unfortunately, I completely forgot that even if I don't create a remote repository, I still need to initialize the project as a virtualenv.

So, I decided to move the github and bitbucket portions into functions and moved the final step outside of the code that creates the repos.  I tested it and voila!!!  Worked like a champ.

So, if you downloaded previously, please update your checked out version so you have the latest copy.


Wednesday, July 09, 2014

Getting started with Ansible

There are a ton of buzzwords thrown around every single day.  So many that it's sometimes hard to determine what is legit and what isn't.  One of the words that I have paid attention to, and come to really love, is ansible.

To quickly sum up what ansible is, it's an automation tool that allows you to run a command or a set of commands across multiple machines.  This is extremely handy, for instance, if you have a bunch of machines acting as web servers and you need to shut down the web server portion for maintenance.

Previously you would have had to log in to each machine and issue the necessary commands.  But with ansible, you can simply put the commands in one file (called a playbook) and then run that playbook against the machines in question.

I would cover how to install ansible, but considering people work on different systems, I will just say that ansible has a pretty good set of docs on this already.  After you have installed the software, you will need a directory structure that looks something like the following:

ansible
    |
    |___ playbooks/
    |             |_____  <playbook_name>.yml
    |
    |___ .ansible_hosts
    |
    |___ ansible.cfg

You don't have to call the top directory ansible, but it helps so that you remember what is in there.  The playbooks directory is needed, and under that is where you store the playbooks, which are in yaml format (thus the .yml extension).

The ansible.cfg file has a lot of options, and you are going to want to read up on how to configure it.  As for the .ansible_hosts file, this is where you list the hosts that are under your control and that you want to act upon.  In there you can list a single host or a group of hosts.  You can read about how to specify your hosts in the ansible site's Inventory documentation.

The way I have one of mine configured, it prompts me for the sudo password so that it can run all the commands that it needs to with sudo.  As an example, I have a playbook called df.yml that will do a df on the set of specified hosts.  The df.yml file looks like this:

---
- hosts: "{{ group }}"
  gather_facts: false
  tasks:
    - name: df
      sudo: yes
      command: 'df -h'

Please keep in mind that this is a yaml file and the format above is specific to yaml.  If you look at the hosts line, there are no hosts specified.  Instead, it simply says {{ group }}.  This is a variable that will be expected from the command line when I run the playbook.   In my .ansible_hosts file, I have a section that looks like this:

[hostgroupname]
host1
host2
host3

To run the df.yml play on that group, I run it as follows (from the ansible directory, referencing everything with that as the root):

    ansible-playbook playbooks/df.yml -v -i ./.ansible_hosts -k --extra-vars "group=hostgroupname"

Notice the '--extra-vars' option and the 'group=hostgroupname' at the end.  That is where the {{ group }} is pulled from.  Ansible will take that and run the commands in the file on each host in that group.

There is a lot more configuration that can (and will) be done for ansible.  Particularly, I will be setting up my ssh key on all the servers; that way the playbook just runs, aside from prompting for the sudo password.  (Yes, my setup currently prompts me for my password to connect via ssh, but that is my current configuration and destined to change when I find the two minutes to do it.)
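If your system has ssh-copy-id, seeding your key on the hosts is a quick loop; something like this (host names taken from the example group above):

    for h in host1 host2 host3; do
        ssh-copy-id "$h"
    done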

So, that is a slight intro to ansible.  Remember to bookmark the ansible documentation.   Hopefully you are able to get it working quickly and enjoy its use as I am.

Tuesday, July 08, 2014

Homebrew and the Untracked Working Tree Files

On the Mac, you have package managers that are designed to allow you to install software via the command line.  Two of the major ones are MacPorts and Homebrew.  This post is specific to Homebrew, as that is the one I chose to use for my system.

If you are like me, you install the software you need and then don't touch Homebrew for a while.  During the lapse in use, the system tends to get stale, and you will find that before you can search for software or install anything, you must first update it.

The update is usually simple and run as follows:

    sudo brew update

Unfortunately, it's not a perfect system, and it will let you know this by pretty much bitching and complaining and exiting, refusing to update.  The output of such a hissy fit looks similar to this:

> sudo brew update --force
error: The following untracked working tree files would be overwritten by checkout:
Library/Aliases/fishfish
Library/Aliases/fluidsynth
Library/Aliases/gearmand
Library/Aliases/git-tig
Library/Aliases/gnu-scientific-library
Library/Aliases/google-go
Library/Aliases/gperftools
Library/Aliases/gpg
Library/Aliases/gpg2
Library/Contributions/brew_bash_completion.sh
Library/Contributions/brew_fish_completion.fish
Library/Contributions/brew_zsh_completion.zsh
Library/Contributions/cmd/brew-aspell-dictionaries.rb
Library/Contributions/cmd/brew-beer.rb
Library/Contributions/cmd/brew-bundle.rb
Library/Contributions/cmd/brew-cleanup-installed
Library/Contributions/cmd/brew-dirty.rb
Library/Contributions/cmd/brew-gist-logs.rb
Library/Contributions/cmd/brew-graph
Library/Contributions/cmd/brew-grep
Library/Contributions/cmd/brew-leaves.rb
Library/Contributions/cmd/brew-ls-taps.rb
Library/Contributions/cmd/brew-man
Library/Contributions/cmd/brew-mirror-check.rb
Library/Contributions/cmd/brew-profile.rb
Library/Contributions/cmd/brew-pull.rb
Library/Contributions/cmd/brew-readall.rb
Library/Contributions/cmd/brew-server
Library/Contributions/cmd/brew-services.rb
Library/Contributions/cmd/brew-switch.rb
Library/Contributions/cmd/brew-tap-readme.rb
Library/Contributions/cmd/brew-test-bot.rb
Library/Contributions/cmd/brew-tests.rb
Library/Contributions/cmd/brew-unpack.rb
Library/Contributions/cmd/brew-which.rb
Library/Contributions/cmd/git
Library/Contributions/cmd/public/bootstrap.min.css
Library/Contributions/cmd/public/glyphicons-halflings-white.png
Library/Contributions/cmd/public/glyphicons-halflings.png
Library/Contributions/cmd/svn
Library/Contributions/example-formula.rb
Library/Cont

No matter what you do with the brew command (at least from what I have found), you won't be able to update.  What is nice, though, is that brew uses git to do its checkouts and updates.  And, for your knowledge going forward, brew uses /usr/local as its base directory.  So, the easiest way that I found to correct this issue and update brew is to completely bypass the brew command and go straight to git.  

The following set of commands should work to get your brew back up and working (note: I ran these commands as root, hence the # prompt):

    # cd /usr/local
    # git fetch origin
    # git reset --hard origin/master
After doing the above, you should find that running brew update has the following results:

    # brew update
    Already up-to-date.
Why?  Because you just circumvented the brew command and updated it manually.  You can now run 'brew search' or whatever other brew command you were going to run.  Have fun!

Thursday, May 22, 2014

Update: Python Virtualenv Setup Project

I have been doing a bit of work over the last day or so on my script for setting up a python development environment, called "Python Virtualenv Setup".  When I first mentioned the project on here several posts ago, I said that I was planning support for auto-creating a bitbucket repository during the script's run.

Well, I not only added support for creating a bitbucket repository, but I added support for github as well.  So, the two popular and competing services are now supported.  I even went so far that if you create one of the remote repos, the script also does a 'git init' in the project directory for you and provides you the command for setting your origin for pushing your code to your new, remote repository.

I feel like Tim Robbins in Antitrust, "God, I love this stuff!".

Thursday, May 15, 2014

How To Recursively Download A Website / Web Directory

On the web you can find plenty of open directories to browse, or directories where a company keeps its files for download.  Using Google, you can use its built in advanced search functionality to tell it what you want to find.  For instance, to find open directories that have jpg and gif images, you could use:

     -inurl:(htm|html|php) intitle:"index of" +"last modified" +"parent directory" +description +size +(jpg|gif)

If you are interested in what you can do with your Google search, check out Google's Search Operators page.  It is certainly enlightening for those wanting to go beyond the basic search.

The problem with browsing the directory in the browser is that you need to click on each file to download it.  You could write a script in your language of choice to go to the directory and download all files of a specific type.  Or, if you have the wget application on your computer, you can download that directory listing without having to click on every file.  wget has a number of different options to do its job, and you should spend a little time reading its manpage in order to familiarize yourself with them.
For our purposes, we are going to use wget with the following options:

     wget -r --no-parent <url>

The '-r' tells wget to retrieve files recursively.  There is a '-l' or '--level' option to specify how many levels deep to go.  The default is 5 levels, and we are sticking with that default.

The --no-parent option tells wget that it should not follow the parent directory link ( '..' ) in the directory.  That way it limits the recursion to just the directory structure you are in, making where you started it (better, where you specified to start in the url) the top level.  The <url> is the address of the directory you want to download from.  

Now, wget, by default, obeys robots.txt, which means website owners can control where you can and cannot go, among other things.  You can turn off the obeying of robots.txt by using the '-e robots=off' option.  Here is an example of the above command, disobeying robots.txt:

     wget -r --no-parent -e robots=off <url>

Please know that using this option is rude and could possibly get you (and/or your IP) banned from accessing a site, should the site owner find out.  Especially if you overload their site with your downloading.  Regardless of your intentions, remember that those who can't see you or don't know you have no idea whether you're a good guy or a bad guy.

wget is quite flexible and useful.  Read the manpage and it will become one of the many awesome tools in your arsenal.


Update:  One more thing.....

If you are attempting to download the content from an open directory structure and your download fails, you are probably going to want to continue the download from where you left off, without re-downloading.

You will want two other options for this: the '-nc' option and also '-t 0'.  The '-nc' means no-clobber; with it set, wget will not try to re-download a file if a copy already exists locally.  No, this is not an option for updating, just for not re-downloading if you don't care about updates.

The '-t 0' option sets the number of retries to infinity, so wget will keep re-requesting a file until it succeeds.  You may find that a site might shut you out or get bogged down; this will keep the requests going for the files.  If it hangs outright, you'll have to kill the process and restart it.
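Putting those together with the earlier options, a resumable recursive download looks something like this (the url is hypothetical):

     wget -r --no-parent -nc -t 0 http://example.com/files/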

Monday, May 12, 2014

A Couple Of virtualenv Notes

A little while ago I posted about starting a python project, with a link to a bitbucket project of mine that can be used to create a virtual environment (and have the environment auto-activate upon entering the project directory).  I wanted to post just a couple of things that I learned, one of which bit me a touch when working on a recent python script.

I was working on said script and kept getting baffling errors.  The errors were not making sense, telling me that the module I was working with was not installed.  I called 'bull', as I knew I had installed it in the environment using pip.  Well, I thought I had.  It bit me in that I did not remember that when you are working in a python virtual environment, you NEED to use the pip and python executables installed under the environment (in the environment's bin directory).  If you don't, and just call pip or python, you will get the system version by default.
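A quick way to see which interpreter you are actually calling (paths here are purely illustrative):

    which python                            # -> /usr/bin/python (the system python)
    source ~/projects/myapp/bin/activate
    which python                            # -> ~/projects/myapp/bin/python (the environment's)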

This was brought even more to my attention when I ran 'pip freeze' to see what modules I had installed, and the list was looooooong.  I quickly realized that I was not using what I thought I was.  So, always make sure you reference the correct executable.  Of course, one gotcha to remember is that once you develop your application, if you move it out of the virtual environment, you need to fix any paths that point to the python interpreter.

Consider that a lesson learned for me.  Hopefully it will allow you to forego learning it too painfully (or just annoyingly).

Wednesday, April 16, 2014

Add Services In Linux Using chkconfig

As a SysAdmin, you are responsible for everything to do with the systems under your control.  These responsibilities range from system installations to all kinds of maintenance and updates.  You may find, now and again, that you want to have something start up when the machine boots, and the best way to do that is with an /etc/init.d script.  In these scripts, you define start, stop and restart options, with the commands you want run for each.
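As a bare-bones illustration (the service name is hypothetical), such a script might look like the following; the 'chkconfig:' comment header is what lets chkconfig register it in the listed runlevels with the given start/stop priorities:

     #!/bin/bash
     # chkconfig: 345 90 10
     # description: Example service
     case "$1" in
         start)   echo "Starting myservice" ;;   # launch the daemon here
         stop)    echo "Stopping myservice" ;;   # stop the daemon here
         restart) "$0" stop; "$0" start ;;
         *)       echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
     esac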

To begin, a really handy command that you should get to know is chkconfig.  This command will show you whether a service is set to be on or off in each runlevel.  If you simply type the command with no options, you will see output like the following:

     httpd         0:off   1:off   2:on    3:on    4:on    5:on    6:off

As you can see, the runlevels are shown, with the corresponding on/off setting for each.  In order to add a service, you need to first create the script to execute and put it into the /etc/init.d directory.  If you have software that you have unpacked that comes with its own script already written, you can simply create a link to that script in the /etc/init.d directory.  Either way, once you have done that, you can add it as an active service by issuing:

     chkconfig --add <service>

The <service> is the script or link name that you just put/created in /etc/init.d.  After adding it, check that it was successfully added by doing:

     chkconfig | grep <service>

You should see the above output if the service was successfully added.  If you need to change a service to be on or off in a specific runlevel, then the format of the command is:

     chkconfig --level 345 httpd off

The numbers after the '--level' are the runlevels to modify.  As you can see, you can list whichever levels you want, but no spaces, commas, or anything else. After that is the service to modify, followed by whether it is off or on in the listed runlevels.

To remove a service, simply use:

     chkconfig --del <service>

Again, check that it was removed per the above command.

The chkconfig command does have other options available to it, but this should give you a basic overview of how to use it.  If you wish to read further, please feel free to read the man page.

It is important to note that chkconfig does not exist on all systems and is typically found on Red Hat-based systems.  If you are on, say, a Debian-based machine (such as Ubuntu), then you will need to use 'update-rc.d'.

We will save that for another post.....

Removing Files Older Than So Many Days In Linux

On our own home systems, we tend not to run into the issue of files from this product or that product building up and eating our disk space.  But when you are dealing with servers and the software that people run on them, preserving space by deleting unnecessary logs and other files is a necessary skill.

Just as an example, we use Puppet where I work to manage system configurations.  While the puppet logs tend to take up a bit of space on our puppet server after a while, it's the puppet reports that end up eating the most space.  

The puppet reports are located in /var/lib/puppet/reports.  Under that directory are (potentially) a whole slew of directories, one for each machine that puppetizes off of that master.  In each of those directories are *.yaml files.  A yaml file is created each time puppet runs on a machine and connects to the puppet master.  

So what is the first step in purging the files?  Well, let's start by seeing how many files we are actually talking about.  To do this, you can use the find command:

   find /var/lib/puppet/reports -name "*.yaml" | wc -l

What that command does is search the reports directory for all yaml files.  It then reports the total count of all files found.  Next, let's see how many files we are looking at getting rid of.  Let's say that we are going to keep the last 14 days of files.  For that we would simply modify the above command to be:

   find /var/lib/puppet/reports -name "*.yaml" -mtime +14 | wc -l

Again, it will report the total.  You will notice that the number is smaller than the previously reported number.  Now, if you are ready to remove those files, a simple modification will do that for you:

   find /var/lib/puppet/reports -name "*.yaml" -mtime +14 -exec rm {} \;

You have to just love the power of the command line in unix.  With just a few keystrokes, you can purge the unneeded files with a single command and a few options.  

Sunday, April 13, 2014

Free Programming Books and Links

There is no question: Free is just one of those words in the English language that, when seen, makes people take notice.

I stumbled across a link to some free programming books and links that I feel just needs sharing.  Now, before you get too excited, the links on that page take you either to an online version of a book or to a page with a tutorial or article.

To be honest, I have found a bunch of good information in here since finding it and hope that you do as well.

Tuesday, April 08, 2014

NEW OpenSSL Vulnerability

For those who haven't heard, there is a new OpenSSL vulnerability that was found, dubbed Heartbleed.
If you haven't done any patching yet, you'll want to if you have an affected version of OpenSSL installed on your system(s).

You can test your sites with this software, released today.

To check your systems to see which version of openssl is installed, simply run 'openssl version' and check what it reports.  Versions 1.0.0 and 0.9.8 are NOT affected, but if you are on an affected 1.0.1 release (1.0.1 through 1.0.1f), you will need to patch to version 1.0.1g (the newest release, which fixes the issue).
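For example (the version string here is just illustrative of an affected one):

     openssl version
     OpenSSL 1.0.1e 11 Feb 2013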

If you are using Amazon AWS, here is how you can update your instances.  Also, Amazon has launched a new AMI that contains the fix as well.

Just a note about the Amazon instructions, you'll need to use the following command to unpack the tarball:

     tar -xvf <tarball-name>

The article incorrectly stated a command that simply hangs; the above will extract correctly.

NOTE:  Since the writing of this post, the article has been updated to include the 'f' option.

If you are using openvpn, then you may find the application was pre-compiled with openssl 1.0.1e or another affected version, making it a static build.  I hear that OpenVPN is supposed to be releasing an update that uses 1.0.1g.

UPDATE:  Here is a link to a reddit post that provides further information on the bug.

Friday, April 04, 2014

Getting Back To Some Linux Basics

Yogi Berra is famous for many '-isms' which make people laugh.  One of those '-isms' that I like is:

       "Life is a learning experience, only if you learn"

One of the things I love about my job is that I learn something new every day.  While I have learned a lot over the years (thus far), I am also humble enough to know that there are so many things that I don't know.  

In the spirit of learning, I wanted to share with everyone a link to some Linux basics.  While Linux has a friendly GUI front end, which makes it easy for someone new to the operating system to get around and get used to it, the true power of the operating system lies in the command line.  Some would say it's a dying art, but I refuse to believe that.  There is so much you can do on the command line, faster than you can through a GUI, that it will never truly go out of style.  

Here is a link to a nice set of Linux commands that not only gives a quick overview, but also shows examples of the commands being used.  What was even nicer about the folks over on that site: they even created a pdf version of the page.  Please keep in mind, though, that the pdf version does not contain all of the examples that are linked to in the main page.  

That page (and accompanying pdf file) are only a small subset of the commands available in Linux.  If you are looking for a more complete reference, here is one that goes over the commands that are part of the Bash shell (one of the more popular shell environments in Linux).

And, last but certainly not least, for those of you who are anal enough (like me) to want a complete reference, here is a link to an online version of the Linux Complete Command Reference (pdf version).

Even if you don't make Linux part of your career, it's beneficial for you to check it out.  Not only is it far more secure than Windows (the viruses affecting Windows-based systems cannot affect Linux), but it's also FREEEE!!!!!   I know, as my Dad asked me before I switched him over, "Where will I get support?"  My answer to him was "Me!".  To you, my answer is "Google!".  It's the best support reference I can give, other than knowing someone who knows it.  

Enjoy!

Saturday, March 29, 2014

Starting A Python Project And Enabling Git For Source Control

Sorry for the long title, but it encompassed what the article is about.

I have been doing a lot with Python in order to better learn the language.  As part of my learning, I read about the virtues of virtualenv.  For those who aren't aware, virtualenv is this sweet toy that makes a project directory an environment, and as such, puts an installation of Python into that environment.  You can install modules and such using the commands under that environment and they won't affect the python installation that is on your computer itself.  It's even better because if you screw up and want to start all over, it's a matter of simply deleting the directory and starting over.  

But, to do so, you need to do the following (a shell sketch of these steps follows the list):

- create the directory
- run the virtualenv command in the directory
- activate the environment each time you go into the directory and want to work
- deactivate the environment when you want to leave the directory and stop working on the project.
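In shell terms, those steps look something like this (project name hypothetical):

    pip install virtualenv             # one-time install, if you don't already have it
    mkdir myproject && cd myproject    # create the project directory
    virtualenv .                       # seed it with its own python
    source bin/activate                # activate each time you start working
    deactivate                         # deactivate when you stop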

For the last two pieces, there is a nice bit of automation that exists called autoenv.  It's easy to install and cake to set up.  What makes it so excellent is that when set up properly, cd'ing into the project directory activates the environment and cd'ing out of it deactivates it.  It's flippin' sweet! 

Even though that level of automation exists for the {de}activation, I thought it would be nice to have a bit of automation for simply starting a project.  Something that sets up the project directory and even creates the necessary bits in the directory for autoenv to work.  So, I sat down and did it with a little bit of bash.  

You can view the project, called Python Virtualenv Setup, and even download the code.  I have been doing some refining on the script, so if you decide to play with it, please check back now and again for any updates.  I opened an enhancement ticket under issues, as the script currently assumes a specific project directory structure already exists.  I am going to make it a touch more dynamic and have it check for that directory and ask for one if it doesn't exist.  

If you decide to play with the script and find any issues/errors or know of some enhancements, then please feel free to open a ticket under the issues tab.  If it's an enhancement request, I will definitely take it under advisement, but please know that the script, as it is written, does what it was designed to.  I haven't put any thought into further expansion of its duties, but it's also not out of the realm of possibility.

As the title suggested, I have been using git for source control for my projects.  As you can also tell from the project link above, I am using bitbucket as well for remote hosting of the code.  I know, you're probably asking "Why not Github?".  Well, I did a bit of homework on this, as you can well imagine.  I do have an account on Github and follow a number of projects.  I even have a couple of things I have posted up there.  But in my review of both Github and Bitbucket (among others), Bitbucket was the only one that allowed not only unlimited storage, but also unlimited public AND private repositories, all under the free account.  That was quite attractive to a money-conscious mind, I have to say.

Ok, back to what I was actually getting at: I wanted to cover the quick basics of how to start a project in git, especially for those who are just starting out with it.

Like starting any project, you want to make sure you at least have your project directory set up with a "README.md" file.  Both Github and Bitbucket will read this file and use it as your project's page.  I would make sure that you put in there everything about the project, including things like what it's about, how to use it, and even examples and insight.  The people who download your code will be relying on it for answers.  Don't forget to put in installation instructions, no matter how rudimentary you think they might be.  (Thinking about that, I should do that for the above project.)

After you have the directory ready, you are going to go into the directory and issue this command:

    git init

That will initialize the project with git and enable source control.  The next thing to do is to add all the files (or file) that you have created.  You can do that with the following:

    git add .

Once you are ready, you will want to do your initial commit and get your files under source control.  Usually when committing files, you list everything you changed or added.  That way you can look through the log and find the revision you need.  For the initial commit, you can comment just that:

    git commit -m "Initial commit message"

The -m allows you to add your comments in double (or single) quotes.  Listing no files after the closing double quotes will commit all files that are pending.   If you are at all unsure about what you have touched and want a quick recap, use:

    git status

That will print out what is pending and other useful information.

Ok, at this point, you have initialized your repo, added files and checked them into source control.  What you may not realize, though, is that all of this has taken place on your local machine and has not yet been pushed to any remote server.  Why?  That's how git works.  To get the files to a remote server, you have to tell git where they should go and then push them.

As said before, I am using bitbucket.  So, this example is for that site.  This is how you would tell git that you want this specific project to be pushed to bitbucket:

    git remote add origin git@bitbucket.org:<username>/<reponame>.git

Note:  In order to get this information, you will have to have already created the project on bitbucket.  Believe it or not, Bitbucket will give you the actual above command so you know what to specify.  They are nice like that.

Once you have the project defined and the origin added, you should then be able to do the following to push the project up to bitbucket:

    git push -u origin --all

If everything was already set up correctly on bitbucket, then that should work just fine for the initial push, and all pushes thereafter for the project would simply be a matter of issuing a "git push" in the project directory.  If this is the first time you are using your keys, or if there are problems, then you might see something like this:

    Permission denied (publickey).
    fatal: Could not read from remote repository.

This simply means that there is an issue with your connection to bitbucket.  It could be any number of things.  The first thing to check is what identity you're passing.  You can check that with:

    ssh-add -l

If that returns nothing, then you aren't passing anything, and that isn't good.  Since we are using ssh to push the files up to bitbucket, you are definitely going to have to make sure that you have already added your ssh public key to your account on bitbucket.  If you haven't, then do so.
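If the key simply isn't loaded into your ssh agent, adding it may be all you need (the key name here matches the example below):

    ssh-add ~/.ssh/ssh_keyname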

After that, you'll have to do the following in the project directory:

    ssh -i ~/.ssh/ssh_keyname -T git@bitbucket.org

If you have done everything you need to, then you should see output that looks like this:

    Identity added: /Users/xxxxxx/.ssh/ssh_keyname (/Users/xxxxxx/.ssh/ssh_keyname)
    logged in as xxxxxxxx.

Obviously, identities have been changed to protect the innocent, but you get the idea.  You can now check that you have an identity added to ssh with which to connect:

    ssh-add -l

That should output your key if it was added correctly.  You should now be able to go ahead and issue the initial push command above to add all your files to bitbucket.org.  If you are still having issues, I would certainly suggest you take any errors you are getting and plug them into Google and see what comes up for results.  

Friday, March 21, 2014

Enable vi mode editing in python, irb (ruby) and others

Everyone who uses Unix/Linux will, at one point or another, choose their editor of choice.  It is either out of necessity or, as in my case, because all of the people using the system(s) around you used the same editor. 

There are several different editors, but the two most popular are emacs and vi.  I have recently started learning emacs, as I am in a group where most already use it.  No, I am not being pressured; I am just actually getting to experience some of the cool aspects of the editor, rather than someone standing around ranting that "emacs is cool!" over and over. 

Overall though, I am still very much a vi person.... no question.  If I do make the switch to emacs at some point, it will be wholly.  But in the meantime, until my comfort level is up, I am sticking with what I know best. 

That said, when I am in things like the python shell, I think having a 'vi mode' is something beneficial to me.  I stumbled on a quick setup that will allow you to have the 'vi mode' in most, if not all, of your different shells (python, irb, etc).

What you need to do is create, in your home directory, a file called .inputrc.  In the file, put the following line:

    set editing-mode vi

(The 'set -o vi' you may have seen elsewhere is the equivalent setting for the bash shell itself, and belongs in your .bashrc rather than .inputrc.)

You don't need to source it; just start up something like irb or the python shell and then, when in there, hit the ESC key and you'll be in vi mode.  Nice, huh?  Enjoy!

Thursday, March 20, 2014

iOS 7.1 Problems & Fixes

It's that time again, time to update your iPhone to the newest version of the iOS software (Per Apple's announcement). 


As with any iOS update, there are certainly problems that people are having.  Thankfully, for some of the most common problems, people like ZDNet have compiled a list of issues with possible solutions.

Looking over the list, I would like to mention a couple of issues that a colleague experienced that are NOT on the list.

1.  90% of my colleague's contacts simply disappeared.

This issue SUCKS!!  You take the time to add the contacts you need on your phone, only to have them simply go away with an update.  Well, I believe I have the solution to this, after my wife's iPhone decided to remove hers for no reason whatsoever: sync them with iCloud.  If you sync all of your contacts with iCloud, you should be able to easily re-sync with the cloud and have your contacts back on your phone rather quickly. 

2.  His corporate Good email app stopped working.

The company we work for uses Good for secure, corporate email on mobile devices.  I am always hearing about the app stopping working after an update, so it came as no surprise to me.  If this happens, you'll have to re-install it and get a new code from your company's Administrator. 

Hopefully this list of problems and solutions will help to lighten the blow if you experience them. 

Friday, February 21, 2014

Alternative HackerNews formatting

I don't know about the rest of you, but I am an avid HackerNews reader.  For geeks, I find it to be one of the best centralized sites for news from around the internet.  The stories are user submitted and I always find plenty of stuff to read and learn from there. 

One thing I found early on was that I really didn't care so much for the default style of HackerNews, but I put up with it for a while.  That is, until I found Georgify, a Chrome extension that pretties up the interface, and does it nicely IMHO. 

I tend to switch between Chrome and Firefox, depending on which machine I am on (work or home), and on Firefox, the Georgify extension doesn't exist by itself.  But, as with many things, there is a workaround. 

First, install the "Stylish" extension, then go over to userstyles.org and search for Georgify.  You will see a bunch of items on the search page.  Pick the one you like and install it.  FYI.... on my machine, I had to authorize a popup in order for the installation to happen. 

I don't know about everyone else, but I have to say that I like the customized extensions to change the way that HN looks. 

Wednesday, January 29, 2014

Understanding The PXE Boot Process

At my job, like most places, the servers we use are located in rooms or buildings other than where we are.  It's not always feasible to be "AT" the server to do an install.  In fact, in our new environment, it is definitely not something that could happen without wasting the time to get down to the data center.

As those who have worked with HP rack-mounted machines know, they have an iLO (integrated Lights Out) interface to allow you to get to the machine remotely.  It's pretty nice to have this ability, I have to say, but using it does take a bit of learning the interface (which means keeping a reference list of commands is truly handy).

When we start up a machine for the first time, the only thing that the machine knows about itself is its MAC address, so when we turn it on, it broadcasts for a DHCP address.  Our DHCP server picks up the request, matches the MAC to an entry in its tables and hands the assigned IP to the machine.

I know this article is about the PXE boot process, but I wanted to at least back up a touch and start at the beginning of our process.  The reason I did this is because of the MAC address.  It's actually quite important in the PXE boot process.  PXE is an acronym that stands for Preboot Execution Environment.  While there are a number of ways to network boot, this article only covers this method (as this is what we are using).  The reason I am covering this is that when you are in an environment that uses PXE booting along with things like kickstart, cobbler and puppet to get machines up, running and installed (as we do), understanding each part of the process is critical for troubleshooting any issues that arise. 

In order to do PXE booting, you need a NIC that is PXE capable.  The reason for this is that PXE makes the NIC act as a boot device.  When you turn on the machine (whether sitting at it or doing it remotely), you need to watch the boot process and, at the appropriate time, select to network boot.  Depending on what type of machine you're on, your options will probably be different, so I am not going to say what key sequence you need to hit.  Simply watch the screen for the prompts.  If you miss it, reboot and try again. 

When you select network boot, the machine (actually the NIC) sends out a request to the DHCP server.  The DHCP server responds, returning an IP address for the client (which should be pre-configured), along with the address of the TFTP server, the location of the boot files on that server, and the name of the boot image (called pxelinux.0).

The machine then contacts the TFTP server, retrieves the pxelinux.0 image and executes it.  The boot image then searches the pxelinux.cfg directory on the server for boot config files that match the machine that is trying to boot.  There is an order in which the search for that file is performed:

- The first thing searched for is a file named for the MAC address of the machine, prefixed with '01-' (the ARP hardware type for ethernet).  The filename uses a '-' in place of the ':' that usually separates the parts of a MAC address.
- If the file with the MAC is not found, then the search takes the IP of the machine, converts it to uppercase hex, which produces a string of 8 alpha-numeric characters.  The search then uses that hex string and searches for a file with that name.  If it doesn't find it, it then removes one character from the end and searches for a file with that new name.  It does this consecutively, each time removing a character from the hex string. 
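As an aside, you can compute that hex string yourself with printf; the D067A25B in the example below is just the dotted address 208.103.162.91 rendered in uppercase hex:

    printf '%02X%02X%02X%02X\n' 208 103 162 91    # prints D067A25B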

Here is what the search process would look like:

/pxelinux.cfg/01-ff-aa-63-33-aa-dc
/pxelinux.cfg/D067A25B
/pxelinux.cfg/D067A25
/pxelinux.cfg/D067A2
/pxelinux.cfg/D067A
/pxelinux.cfg/D067
/pxelinux.cfg/D06
/pxelinux.cfg/D0
/pxelinux.cfg/D
/pxelinux.cfg/default


If it does not find any machine-specific file, it falls through to the default file which, for us at least, presents a menu with a list of profiles.  It should never actually get to that point.  If it does, then you need to check:

- that you have the MAC address correct in the DHCP server.
- if you are using cobbler, then check that all machine information in cobbler is correct.

 I hope this allows you to better understand the PXE boot process.   I find it extremely useful. 







Monday, January 13, 2014

No Audio Via AirPlay From Macbook To Apple TV

For Christmas, my wife's work gave everyone Apple TVs.  I was excited to hook it up and play with it, and in fact was quite impressed with it right off.  After playing for a bit, I tried to play a movie I had on my Macbook Pro.  While the video was mirrored over to our TV via AirPlay, the audio seemed to stay with the laptop.  Talk about odd.

I tried a couple of things (disconnecting and reconnecting from AirPlay, rebooting, etc).  I then did some Googling and discovered a post on the Apple Support forums that said to run the following:

    sudo killall coreaudiod

After I ran that, I was able to play the video I wanted to (via VLC Media player) AND hear the audio on the TV.  Woo Hoo!    Some reported that they had to reboot as well, but for me, I just had to run the above command before starting up VLC media player.

Best part of the forum post I found was that NOBODY from Apple Support posted the workaround.  It was one of the users that had discovered it and posted it.

Thursday, January 09, 2014

Upgrade To Mac OSx 10.9.1 Breaks pip

As with any upgrade to an operating system, you need to make sure that the things you use do not break.  Unfortunately, the breaks that happen are not always immediately apparent.

I decided to start playing with virtualenv (for creating virtual environments in python).  I was following a tutorial guide on virtualenv, the first step being to install virtualenv using pip, which I have had installed on the machine for quite some time.

I attempted to run the command to install virtualenv:

    sudo pip install virtualenv

and upon doing so I was met with the following message:

    Wheel installs require setuptools >= 0.8 for dist-info support.
    pip's wheel support requires setuptools >= 0.8 for dist-info support.

This made my eyebrow cock a bit, as I know that pip was working only a week ago when I installed a couple of modules using it.  I thought quickly, trying to determine what might have changed, and then I remembered the OSx upgrade that I had just done.  Awesome, they broke pip.

I'll admit that I went a bit of a long way around on this one, mostly because I wanted to make sure that other things weren't broken.  I thought about what, other than pip, I use to install things.  There is easy_install and also brew.

I decided to try 'sudo easy_install --upgrade pip' to see if that would resolve my pip issues.  It kindly told me it was already the most current version.  Awesome.

I then ran a 'brew update'.  Formulas were updated, but nothing major.  I then ran 'brew doctor'.  This provided a lot more output.  I have a couple of apps that need updating, but the thing that stood out is that there was also a complaint in its output about setuptools.  The output suggested running 'xcode-select --install' to resolve the setuptools issue, so I ran it.  The run took a few minutes, but did complete successfully.  I tried to again run the install command for virtualenv and STILL received the same error.  More research was obviously needed.

I Googled the error and found a StackOverflow post with the error that exactly matched what I had.  Per the most upvoted reply, I ran the following:

    sudo pip install setuptools --no-use-wheel --upgrade

Then, for giggles, I started up XCode.  This immediately informed me that there were updates needing to be applied, so I opted for them.  Once that completed, which took about 10 minutes, I went back and ran the install command for virtualenv.  Success!!!  WOO HOO!!!

The lesson here, if you upgrade, be prepared for things to break.  I found that it was my development tools that took the hit this time.  I have tested a number of other applications that I have on this machine, but nothing else seems to have any issues.  Hopefully your experience is a bit smoother.

Sunday, January 05, 2014

Modifying The Python Module Search Path

After having done a fair amount of programming in Perl, I decided a while ago that while learning Python, I wanted to not only learn the details of the language, but also make sure that I try and do things in a manner that is both easy to re-figure out later, and is also not considered an awful practice by the community.

Currently, while working on a project, I have been researching adding a directory to the module search path in Python.  While reading the module documentation, it was mentioned that one of the easiest things to do is to just add a path to the PYTHONPATH system variable.  That would be a fantastically easy thing to do, except I logged into a few different systems where I have python installed, and not one of them had that variable set.  (Those systems included both Linux and Mac OSx operating systems for those curious).
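For reference, had I gone that route, it would just be a line like the following in your shell profile (the path is hypothetical):

    export PYTHONPATH="$HOME/myproject/lib:$PYTHONPATH"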

To be honest, I didn't want to venture down the path of determining all of the paths that should be set in that variable, as there could be many.  So, I started up the python interpreter, imported the sys module and then printed the path, like so:

    > python
    Python 2.7.5 (default, Aug 25 2013, 00:04:04)
    [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import sys
    >>> print sys.path

This printed out quite a long list of paths that python uses to search for modules.  So, I needed a way to add a directory to that list, so that I could place all of the home-grown modules for my project in one place and reference it in the search path.

Thinking back a few weeks, I remembered a method that a colleague used in a script that he had written, and after doing a little testing with some of that code, I came up with the following method to do what I needed:

    #!/usr/bin/env python
    import os
    import sys

    modulesDir = "lib"                        # subdirectory holding your modules
    prog = os.path.abspath(__file__)          # absolute path to this script
    progDir = os.path.dirname(prog)           # directory the script lives in
    libDir = os.path.join(progDir, modulesDir)
    sys.path.append(libDir)                   # append it to the module search path

For the above solution, you replace "lib" with whatever you called the directory.  This assumes that you have a main directory where your primary script resides, and in there a subdirectory holding your modules.  It nicely appends that directory to your search path.  I am not sure if it's the prettiest or most acceptable way to accomplish this, but it does the job.  If someone knows of a better, more acceptable way, I would definitely be interested in hearing their explanation.




Saturday, January 04, 2014

Does that site have an RSS feed?

A while ago I gave our daughter my old DroidX phone and since then, she has increased her use of it.  She now has a free texting app that she uses with her friends (not that I am completely comfortable with it, but, baby steps), and today, she discovered the news widget, which grabs article titles and summaries from rss feeds.  

Well, I found the sites for the tween/teen magazines she reads and after examining all the different areas of each site (having my IQ drop with each link click), I determined that I really needed a more definitive way to find the rss feed links for the site(s).

So, on a whim, I opened up the page source for the main page of each site and searched for "rss".  What do you know: if there is an rss feed, it's typically mentioned in there.  Good stuff to know if you have tweens/teens and need to set this stuff up for them.  
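If you'd rather not dig through the browser's view-source, the same check works from the command line (the URL is hypothetical):

    curl -s http://www.example.com/ | grep -i "rss"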


Friday, January 03, 2014

URL For This Blog

A little while ago, I purchased a domain that I liked, and that goes along with the naming of my site (ParsedContent).  Although I had bought it, I unfortunately did not have the time to implement anything with the domain.  

So, about a month ago, while doing some work before bed, I got a bug up my proverbial butt and decided to set up the domain to point to this blog.  

So, without any more ado, the URL that you can use to get to this blog is http://www.parsed.co/.  Please note that the 'www' portion is necessary as it is required by Google when setting up a URL for your blog.  If you remove it, you will get a pretty, blank white page.

Thursday, January 02, 2014

Free Hosting (a.k.a. Yes, you can use Amazon AWS for free)

When I started at the job that I have now (and absolutely love), I got my first foray into the world of Amazon AWS.  Before starting here, I did not have any experience with the cloud, but I cannot say that I was not absolutely intrigued by it.  Sure, I had a Dropbox account and a Bitcasa account, among others, but that's not the type of cloud experience I am referring to.

Amazon AWS (which stands for Amazon Web Services) is Amazon's cloud services platform.  There is an absolute plethora of companies that use it on a daily basis, including Amazon itself, which uses it for its own store.

Sure, to use it there is quite a bit of a learning curve, but believe it or not, it's not that difficult; it just requires the time to dedicate to learning it.  Amazon keeps quite an array of documentation for you to reference and learn from, and if you or your company have enough in the budget to support it, they have training available around the country.

But, believe it or not, one does not have to spend a cent in order to get started with Amazon AWS, so long as you stay within their guidelines, which are outlined on their Free Tier page.

You could use it to set up a website (even a database-backed one) and have it up and running for a full year before you have to pay a cent to Amazon.  That is truer than you think.  Amazon gives you 750 free hours of running instance time each month.  Which means, unless they decide to extend the number of days in the calendar (even a 31-day month is only 744 hours), you can leave your single instance running 24x7 without going over.

Amazon has quite an incredible cloud platform to work with, but be careful: once you start paying, if you do not monitor your activity, the bill can grow exponentially.  But that's not to say it cannot be managed; it is done every day by companies that use untold amounts of Amazon's services.

I hope you not only enjoyed this article, but that you let curiosity be your guide and try the cloud.  It's quite fun.
 
Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 License.