
Tuesday, February 07, 2023

Removing Unwanted Lines From A File

Intro

This is a bit of a "back to basics" post.  I find it is good to revisit the basics now and then, as it keeps you sharp.  It also helps as a reminder when you find yourself needing to perform this task.  One never knows when they'll need to remove lines from something like a log file (for, say, hiding one's interactions on a server).

So, let's say you have a file (we will call it logfile.txt) on a system, and you need to remove all the lines that contain the text 'sometext' (I know, pretty standard, but you understand the usage).  I will preface this with the fact that this is being done on Linux.  And as with anything on Linux, there are a plethora of ways to do things.  So long as the end result is what you require, the method was correct, even if there are quicker or more efficient ways.  These are just a couple of the quick ways that you could achieve this goal.  So without further ado, let's jump right in.

NOTE: Please keep in mind that all of these methods achieve exactly the same thing, just in different ways.



Method 1

So the first method is my preferred method.  Why?  Well, because it's a one-liner and it doesn't require me to move files around to different names.  It's just quick and direct.  But it also means you need to be absolutely positive that it is doing what you expect, as it immediately modifies your file and is not reversible.

 

The Method:

$ sed -i '/sometext/d' logfile.txt

In the above, the '-i' tells sed to edit the file in place (that's what makes this the more dangerous, immediate version).  The /sometext/ is the text to match on each line of the file (yes, the file will be read, line by line, checking each one against 'sometext' for a match).  The 'd' option says to delete a line if a match is found.  'logfile.txt' is the file to search in.

This tends to work quickly and efficiently.  It would help to search the file first, say with grep, to be sure of what you will be matching.  Erring on the side of caution is always a good thing, but that's just me.
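For example, a quick sanity check with grep will show exactly which lines would be deleted before you commit to the irreversible edit:

$ grep -n 'sometext' logfile.txt

The '-n' option prints line numbers along with the matching lines, so you can eyeball what is about to disappear.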


A Note Before Continuing:  The above method is the only immediate method I am presenting.  The other two methods I show will involve the use of temporary files.  That said, they are the safer options, unless of course you decide to script them, in which case you make them more immediate.  Your choice.



Method 2

This method involves using grep with the '-v' option, which inverts the match.  It takes the lines that do NOT match and outputs them to a temporary file.  After grep runs over your file, you then move the temporary file back to the original filename.  Or you could name it something different; the choice is yours.

 

The Method:

$ grep -v "sometext" logfile.txt > tempfile && mv tempfile logfile.txt

 

Again, you could move the tempfile to another filename.  Or, even better, after purging into the tempfile, rename the original file to a backup name, and then rename the tempfile to the original name.
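Spelled out, that safer backup variant might look like this (logfile.txt.bak is just an example backup name):

$ grep -v "sometext" logfile.txt > tempfile
$ mv logfile.txt logfile.txt.bak
$ mv tempfile logfile.txt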

Method 3

This is the last method I will cover here.  It involves the awk utility.  Just as the others, it does a match, but here the pattern is negated with a '!', which tells awk you want all lines that do NOT contain the matching text.  So any lines that do not contain 'sometext' will be output to the tempfile.

 

The Method:

$ awk '!/sometext/' logfile.txt > tempfile && mv tempfile logfile.txt

 

And as noted previously, you can rename things as you wish: either back to the original file name, or backing up the original first and then renaming.  It's totally up to you.  And also, totally scriptable.
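Since I keep mentioning scripting, here is a minimal sketch of what that could look like, built around the awk method.  The script name and the .bak convention are just my choices for illustration:

#!/bin/bash
# delmatch.sh (example name): remove lines matching a pattern from a file,
# keeping a .bak copy of the original for safety.
# Usage: ./delmatch.sh 'sometext' logfile.txt
set -e
pattern="$1"
file="$2"
# Keep only the lines that do NOT match the pattern
awk -v pat="$pattern" '$0 !~ pat' "$file" > "$file.tmp"
# Back up the original, then swap in the filtered version
mv "$file" "$file.bak"
mv "$file.tmp" "$file"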

 

Conclusion

I hope that this helps you manage this most basic of tasks.  Enjoy!


Friday, January 13, 2023

Setting Up Your Own Remote Git Server

Introduction

 I don't know about anyone else, but I love having my GitHub / GitLab / BitBucket account, where I can store all of my code (whichever combination of those, or others, you may use).  But I find it handy to have my own local, self-hosted git server, where I can store my code until I am at a point where I want it published up to one of those other sites.  Don't get me wrong, it's not that I am not about sharing, but more about getting things to a usable point first.  Development takes time.

Anywho, that is the 'gist' of this post: what is involved in setting up your own, self-hosted git server to store your code.  I could also note that this is super handy if you are a bit paranoid.  Paranoia, to some degree, can be healthy.  Just don't let it get out of hand.


My Setup

I am running mine on a Raspberry Pi, and here are some further details on my setup:

  • Raspberry Pi 3 B+
  • A Raspberry Pi 3 mSATA SSD shield
  • A 256 GB mSATA SSD stick
  • A 16 GB microSD card
  • A 7" touch screen display (all hooked up with the Pi so I don't need a separate monitor)
I am using a 256 GB drive, as I want to have enough room for whatever projects I decide to work on.  Also, it's expandable, so I am not limited to just that size.  Depending on how much you plan on working on, adjust your SSD size accordingly.


Setting Up Your Git Server

Considerations

You will need to work out where you are going to run this system.  It could be a spare system (desktop or laptop), it could be in the cloud somewhere, or, like me, you could set it up on a Raspberry Pi on your local network.  I find this handy, as it's portable should I need to take it with me on a trip, and it also uses a minute amount of power to run.  No matter what, the first thing you'll need to do is work out where to run it before you can get to the install and setup.

Another consideration: if you are using an mSATA SSD, you'll need to partition it, format it, and get it set up so it mounts at boot.
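As a rough sketch, assuming the drive shows up as /dev/sda1 and you want it mounted at /data like mine (the device name and filesystem here are assumptions; check yours with lsblk first):

$ sudo mkfs.ext4 /dev/sda1
$ sudo mkdir -p /data
$ echo '/dev/sda1  /data  ext4  defaults,noatime  0  2' | sudo tee -a /etc/fstab
$ sudo mount -a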

Install the necessary software

So the first thing you will need to do is install git on your system.  This varies from system to system, so I am not going to get into detail; just make sure that git is installed.
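For example, on a Debian-based system (like the Raspberry Pi's OS), that would just be:

$ sudo apt-get install git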

Setup The Git User

If there is not a git user setup after the software install (there wasn't on my system), then you will want to do the following:
sudo adduser git   (provide whatever password you want, but you have to set something)
sudo -u git -i
mkdir .ssh && chmod 700 .ssh
touch .ssh/authorized_keys && chmod 600 .ssh/authorized_keys

 

Add Developer SSH Key(s) To The Git User's authorized_keys File

For each developer that you have using your system, you are going to have to add their public SSH key to the git user's ~/.ssh/authorized_keys file.  If you copy their key up to the /tmp directory, then you can do something like this, as root:

cat /tmp/userkey.pub >> ~git/.ssh/authorized_keys

Setting Up A New Repository

For each new repository that you wish to host on the server, you'll need to create an empty repository to receive the code that your developer(s) will push.  You'll need a base directory that will house all of your repositories.  For instance, I have my SSD mounted at /data on my Raspberry Pi, with a directory in there called git.  So I host all my repositories under /data/git.

To set up an empty repository for your code, do the following (I will give examples in /data/git, where I host mine; feel free to change to whatever you are using):

cd /data/git
mkdir projectName.git
cd projectName.git
git init --bare
You will see something similar to this output after the git command is run:

Initialized empty Git repository in /data/git/projectName.git/

Developer Setup To Push To Your New Repository

Now that you have your repository created and ready to receive code, all you need to do is have your developer point at it and push their code.  Here is how they point their local repository at your new remote:

git remote add origin git@gitserver:/data/git/projectName.git

After they set that up, they just need to run:

git push origin master
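Putting it all together for a brand-new project, the developer-side flow would look something like this (myproject is just a stand-in for their local working directory, and gitserver / projectName are the same placeholders used above):

$ cd myproject
$ git init
$ git add .
$ git commit -m "Initial commit"
$ git remote add origin git@gitserver:/data/git/projectName.git
$ git push origin master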

 

Securing Your Setup

After you have gone through the trouble of setting up all of this, you are probably going to want to secure it, especially if you are sharing it with others.

Change The Shell For git User

NOTE:  Please know that once you do this, you can no longer just switch to the git user.  Anything you do within the git user's home directory will need to be done as the root user.  So ensure you are ready to commit to this.

The first thing you need to do is change the shell used by the git user to git-shell.  git-shell is a special shell that comes with git; it only allows the git user to perform git-related activities, and does not allow normal interactive shell access.  This is needed because the developers' SSH keys are in the git user's authorized_keys file, and they would otherwise be able to connect to your system as the git user.  This prevents that.

To change the shell for git user, run this:

sudo chsh git -s $(which git-shell)

Remove Port-Forwarding Ability For git User

The second thing you need to do to secure the server is to remove the ability of users to get port-forwarded access, which would give them access to any host that the git server can reach.  To remove this ability, you need to prepend the following text before the ssh-rsa portion of each and every developer key in the authorized_keys file:

no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty
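A finished entry in authorized_keys would then look something like this (the key material and comment here are truncated placeholders, not a real key):

no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3Nza... dev1@workstation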

 

After performing both of the above steps for security purposes, users will still be able to use git commands, but will be unable to get any shell on the system (or other systems).  


Some Notes

After you put the security precautions above into place, you will not be able to become the git user in order to create new repositories and such.  If this is YOUR server, and nobody else will be on it, then you don't need the security precautions, as you most likely aren't going to pwn your own data.  To be honest, if you're going to share with others, it would most likely be via one of the aforementioned services and not your own self-managed git server.  But if you do set something like this up and share it with others at some point, make sure you trust them, or put the measures into place.

It is also worthwhile to note that scripting the repo creation and its modifications is probably a good idea.  Once I get that created, I will share it in another post.
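In the meantime, here is a minimal sketch of what such a script might look like (newrepo.sh is a hypothetical name, and it assumes the /data/git layout from above; run it as root once the git user's shell is locked down):

#!/bin/bash
# newrepo.sh (example): create a new bare repository under /data/git
set -e
name="$1"
base="/data/git"
# Create the bare repository that developers will push to
git init --bare "$base/$name.git"
# Make sure the git user owns it so pushes over SSH work
chown -R git:git "$base/$name.git"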

You'll notice that I did not get into any GUI interfaces, like Gitea, GitLab, etc.  The reason is that I just wanted to provide guidance on setting up a plain git server for your code, and no GUI is needed for that.  If you want one of those, and decide to use one, there are a plethora of tutorials out there for installing and setting them up.



Saturday, May 05, 2018

Setup WiFi to auto-connect regardless of logged in state (UPDATED)

There are plenty of times when I would like to start up my desktop machine (running Ubuntu 16.04 LTS), have it just boot, and then log into it via SSH so that I can get some work done.  Well, in order to achieve that wish, you need to have your wifi connection auto-connect at boot time.

To do this, there are some things you need to do:

1.  Go to your wifi connection in the upper right of your screen and click on "Edit Connections".
2.  Once in there, you need to click on your connected wifi ssid and select "Edit".
3.  Go to the "General" tab and select "All users may connect to this network".  (without this, the rest of this is moot and it will not work)

Ok, that is the only gui portion of this setup.  Next:

4.  Create the file /etc/wpa_supplicant.conf and add the content that follows:

ctrl_interface=DIR=/var/run/wpa_supplicant group=wheel
network={
  ssid="your_connection_ssid"
  scan_ssid=1
  key_mgmt=WPA-PSK
  psk="password"
}
In the above code, you will input your ssid that you would like to connect to and also enter your password for the connection.  If you are using an encryption method other than WPA/WPA2, then you will need to reference the man page for wpa_supplicant.conf.  There are plenty of examples to help you out. 

After you have that file all setup, save it.

5.  Issue the following command on the command line, as root or with sudo:

wpa_supplicant -B -i wlp4s0 -c /etc/wpa_supplicant.conf

You should see a message that says something to the effect that wpa_supplicant initialization was successful.
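One thing to keep in mind: wlp4s0 is the wireless interface name on my machine, and yours may well be different.  You can list your interfaces with:

ip link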

After you have completed those steps, go ahead and reboot the machine and watch the login screen.  If all is correct and right with the configuration, you should see the wifi connection show as connected.  You can test this by ssh'ing to the machine.  Enjoy!


UPDATE:  I took the opportunity to reinstall my Linux system to go from Ubuntu 16.04 LTS to Ubuntu 18.04.  I know it's a bit extreme to do a full re-install, but:

1.  A distribution upgrade wasn't yet available
2.  There wasn't anything I couldn't re-setup on the machine
3.  Ubuntu moved away from Unity to the Gnome desktop, so I wanted everything clean and fresh.

I will say this though, auto-connection of the wifi (seeing as how you tell it how to connect during installation) just WORKS with Ubuntu 18.04.  You just have to make sure to install the openssh-server package.

Friday, November 20, 2015

Creating A Rubygems Mirror On Ubuntu 14.10

So I guess it is apparent that I am on a bit of a kick creating mirrors.  This is because they are creating mirrors internally at work and I want to have a better understanding of how things work (and what is needed to puppetize things). 

Before we can get down to the nitty-gritty of creating the mirror, we have to do a bit of prerequisite work first.  To start with, I will tell you that I am working on a fresh installation of Ubuntu 14.10.  That disclosed, ruby comes installed by default.  Please know that I will only be referencing the default Ruby installation that ships with the system.

The first thing that we need to get installed is the ruby-dev package.  You can install that with the following command:
$ sudo apt-get install ruby-dev
After you install that, make sure that your system is up to date:
$ sudo apt-get update
$ sudo apt-get upgrade
Now, we need to get a couple of gems installed:
$ sudo gem install net-http-persistent
Next we need to install a whole slew of things (including git):
$ sudo apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev
Now it is time to install rbenv.  This tool will provide you the ability to install different versions of ruby (other than the default).  It's handy, so I am including it here for giggles.  Here are the steps to install rbenv (these should be done as you, not sudo/root):
$ git clone git://github.com/sstephenson/rbenv.git ~/.rbenv
- Add $HOME/.rbenv/bin to your PATH variable
- Add eval "$(rbenv init -)" to your .bashrc file
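Concretely, those two steps mean adding something like this to the end of your ~/.bashrc:

export PATH="$HOME/.rbenv/bin:$PATH"
eval "$(rbenv init -)"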
 Now you need to install the rubygems-mirror gem.  This is what is used to create the mirror:
$ sudo gem install rubygems-mirror
After that finishes, you will need to create the '.mirrorrc' file:
$ vi ~/.gem/.mirrorrc
 In this file, you are going to specify where to grab the gems from and where to put them on your system:
---
- from: http://rubygems.org
  to: ~/path/to/put/the/files
You will need to make sure that you create the directory you specify in the "to:" portion.
After you save the .mirrorrc file and create the directory, you are ready to start mirroring:
$ gem mirror
This is going to take a while.  I ran it and it took several hours to download (and I have a fiber internet connection with 30Mb down).  So, once you start the download, you might as well go do something else (watch a movie, read a book, hack on a Raspberry Pi, whatever you feel like).  Just monitor it once in a while to ensure it keeps going successfully.

Warning:  The download is many gigabytes and you are going to need some space for it.

That's it, you now have a mirror of Rubygems that you can reference in offline situations.  Stay tuned and I will put up a post on how to run a server that points to this mirror so that you can put it to good use.
 



Thursday, September 03, 2015

raspi-config Missing On Kali Linux 2.0

A couple of months ago I purchased one of the shiny new Raspberry Pi 2's (you know, the one with the 900 MHz quad-core chip and 1 GB of on-board RAM) and have been playing with it.  Within the last couple of weeks, Kali Linux 2.0 was released.  Seeing as how I have this sweet Raspberry Pi laying around, I figured, why not get Kali 2.0 running on it.

So, for my birthday, my awesome wife got me a Class 10, 32 GB MicroSD card.  I promptly loaded it with the freshly downloaded Kali Linux 2.0 image and booted it.  After logging in to the machine I did a 'df -h' and discovered that instead of the 32 GB I expected to see (ok, it would be closer to 30 GB after reserve), I was only seeing around 7 GB of space.  I hadn't run into this quandary before, as I had only had an 8 GB card previously.

So, I did some googling and found that you can finagle the partition table with fdisk (which isn't installed by default, btw; you will need to install the 'afflib-tools' package in order to get fdisk installed), then reboot and resize the root partition.  Or, as a friend pointed out, you can simply run raspi-config and it will quickly (and quietly) do it for you.

So I searched around and what do you know..... no raspi-config.  Apparently they don't see a need for this extremely useful utility on Kali, so you will need to install it yourself, which, after doing it, wasn't that awful.

First, download the latest version of raspi-config from 'http://archive.raspberrypi.org/debian/pool/main/r/raspi-config'.  Just search in there and either click to download it, or, if you are like me, copy the link and wget it.  Note: remember to download the .deb file, as this is a Debian-based distribution.

Next, you need to install the two prerequisites for raspi-config:

  # apt-get install triggerhappy lua5.1

After you have installed those two, you can simply change directories to wherever you downloaded raspi-config and issue the following command:

  # dpkg -i <downloaded-package>.deb

After that, you should be able to run raspi-config.  The first option is to resize the partition to reclaim space on the card.  That is what you want.  It will tell you to reboot.  Once you do, voila!, space reclaimed and your 'df -h' should show a ton more space (relatively that is).

Wednesday, April 16, 2014

Add Services In Linux Using chkconfig

As a SysAdmin, you are responsible for everything to do with the systems under your control.  These responsibilities range from system installations to all kinds of maintenance and updates.  You may find, now and again, that you want to have something start up when the machine boots, and the best way to do that is to have an /etc/init.d script to do that for you.  In these scripts, you can define start, stop, restart options, with the commands you want to happen for each option.

To begin, a really handy command that you should get to know is chkconfig.  This command will show you whether a service is set to be on or off in each runlevel.  If you simply type the command with no options, you will see output like the following:

     httpd         0:off   1:off   2:on    3:on    4:on    5:on    6:off

As you can see, the runlevels are shown, with the corresponding on/off option shown for each.  In order to add a service, you need to first create the script to execute and put it into the /etc/init.d directory.  If you have software that you have unpacked that has its own script already written, you can simply create a link to that script in the /etc/init.d directory.  Either way, once you have done that you can add it as an active service by issuing:

     chkconfig --add <service>

The <service> is the script or link name that you just put/created in /etc/init.d.  After adding it, check that it was successfully added by doing:

     chkconfig | grep <service>

You should see the above output if the service was successfully added.  If you need to change a service to be on or off in a specific runlevel, then the format of the command is:

     chkconfig --level 345 httpd off

The numbers after the '--level' are the runlevels to modify.  As you can see, you can list whichever levels you want, but no spaces, commas, or anything else. After that is the service to modify, followed by whether it is off or on in the listed runlevels.

To remove a service, simply use:

     chkconfig --del <service>

Again, check that it was removed per the above command.
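To put the whole add flow together for a hypothetical service (the paths and the name 'myapp' are made up for illustration):

     ln -s /opt/myapp/bin/myapp.init /etc/init.d/myapp
     chkconfig --add myapp
     chkconfig | grep myapp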

The chkconfig command does have other options available to it, but this should give you a basic overview of how to use it.  If you wish to read further, please feel free to read the man page.

It is important to note that chkconfig does not exist on all systems; it is typically found on Red Hat-based systems.  If you are on, say, a Debian-based machine (such as Ubuntu), then you will need to use 'update-rc.d'.
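Just as a quick taste, enabling a service with its default runlevels there looks something like this (apache2 used as an example):

     sudo update-rc.d apache2 defaults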

We will save that for another post.....

Removing Files Older Than So Many Days In Linux

On our own home systems, we tend not to run into the issue of files from this product or that product building up and eating our disk space.  But when you are dealing with servers and the software that people run on them, preserving space by deleting unnecessary logs and other files is a necessary skill.

Just as an example, we use Puppet where I work to manage system configurations.  While the puppet logs tend to take up a bit of space on our puppet server after a while, it's the puppet reports that end up eating the most space.

The puppet reports are located in /var/lib/puppet/reports.  Under that directory are (potentially) a whole slew of directories, one for each machine that puppetizes off of that master.  In each of those directories are *.yaml files.  A yaml file is created each time puppet runs on a machine and connects to the puppet master.

So what is the first step in purging the files?  Well, let's start by seeing how many files we are actually talking about.  To do this, you can use the find command:

   find /var/lib/puppet/reports -name "*.yaml" | wc -l

What that command does is search the reports directory for all yaml files.  It then reports the total count of all files found.  Next, let's see how many files we are looking at getting rid of.  Let's say that we are going to keep the last 14 days of files.  For that we would simply modify the above command to be:

   find /var/lib/puppet/reports -name "*.yaml" -mtime +14 | wc -l

Again, it will report the total.  You will notice that the number is smaller than the previously reported number.  Now, if you are ready to remove those files, a simple modification will do that for you:

   find /var/lib/puppet/reports -name "*.yaml" -mtime +14 -exec rm {} \;

You have to just love the power of the command line in unix.  With just a few keystrokes, you can purge the unneeded files with a single command and a few options.  

Friday, April 04, 2014

Getting Back To Some Linux Basics

Yogi Berra is famous for many '-isms' which make people laugh.  One of those '-isms' that I like is:

       "Life is a learning experience, only if you learn"

One of the things I love about my job is that I learn something new every day.  While I have learned a lot over the years (thus far), I also am humble enough to know that there are so many things that I don't know.  

In the spirit of learning, I wanted to share with everyone a link to some basic Linux commands.  While Linux has a friendly GUI front end which makes it easy for someone new to the operating system to get around and get used to it, the true power of the operating system lies in the command line.  Some would say it's a dying art, but I refuse to believe that.  There is so much you can do on the command line, faster than you can through a GUI, that it will never truly go out of style.

Here is a link to a nice set of Linux commands that not only gives a quick overview, but also gives examples of the commands being used.  Even nicer, the folks over on that site created a pdf version of the page.  Please keep in mind, though, that the pdf version does not contain all of the examples that are linked from the main page.

That page (and accompanying pdf file) are only a small subset of the commands available in Linux.  If you are looking for a more complete reference, here is one that goes over the commands that are part of the Bash shell (one of the more popular shell environments in Linux).

And, last but certainly not least, for those of you who are anal enough (like me) to want a complete reference, here is a link to an online version of the Linux Complete Command Reference (pdf version).

Even if you don't make Linux part of your career, it's beneficial for you to check it out.  Not only is it far more secure than Windows (none of the viruses affecting Windows-based systems can affect Linux), but it's also FREEEE!!!!!  I know, as my Dad asked me before I switched him over, "Where will I get support?"  My answer to him was "Me!".  To you, my answer is "Google!".  It's the best support reference I can give, other than knowing someone who knows it.

Enjoy!

Saturday, June 13, 2009

Wireless Printing - SUCCESS!!!

Up until today, the only printer we had was attached to my wife's Windows XP machine via its USB cable.  If I wanted to print a document, which isn't all that often a need here at home, I would just email the file to her and print it after saving it on her machine.  That was a bit tedious, especially when she was on the computer doing her thing.  I don't really like to be a bother and disturb her.

Both of our machines are on the home wireless network, and it being a network, I wondered how difficult it would be to set up wireless printing to the printer attached to her machine.  After doing a quick search on Google and a short bit of reading, I did the following:

1. Set up file and printer sharing on the Windows XP machine. This was quite easy and walking through their wizard is recommended. Make sure that you have set up the following pieces of information:

Machine Name
Workgroup (I changed this from the default)

2. After setting up the sharing, I went to the specific printer's settings and turned on its sharing.  Make sure that you give the printer a name, as you will need it.

3. I am using Ubuntu 9.04 so I went to System->Printing. Click on New. Once you do that, it will try and find machines. In the next box, I clicked on the Network Printer drop down and selected "Windows Printer via SAMBA".

In the options that appear to the right, you will need to enter the smb address in the following format:

smb://workgroup/machine_name/printer_name

After that, if you have authentication setup, you will need to enter it. I didn't, so I just hit verify and it said it found the printer and it was shared. Jackpot, so far so good. Next, click Forward.

4. Now comes the search for drivers.  You can do this blindly, or be smart about it and do what I did.  I have a Brother printer, and the options you are provided at the top for selecting your printer are:

a. Select printer from database
b. Provide PPD file
c. Search for a printer driver to download

I figured I would search Google for a ppd file, so I entered the following into a Google search: brother dcp-7020 ppd file

Lo and behold, the first link in the search was for the OpenPrinting Database.  They basically told me all they knew about my printer and the notes they had on it.  The best piece, though, was that it told me which driver was recommended.  So, I went to the Choose Driver screen, chose Brother and then Forward, and on the next page selected the driver it recommended.

Wouldn't you know but it worked PERFECTLY!!! Isn't Linux just beautiful? :) I think (and know) so.

I hope this helps you set up your printer, wirelessly, on your own home network.  I definitely see the value in print servers and network-based printers, don't get me wrong, but when you are unemployed as I am and have no $$$, it's awesome to be able to do what you need to, for free.

Monday, January 19, 2009

Easy way to combine pdf files

Don't you just love it when you find a good tutorial on something, and instead of having one pdf file that contains everything, you find that each chapter has its own pdf file?  It's more of a pain to have several or more files than one big file.

For instance, I am currently reading "How to Think Like a Computer Scientist", which is an excellent introduction into the world of Python programming.  This book, by the way, is the textbook used for the MIT OpenCourseWare course "A Gentle Introduction to Programming Using Python".  The book is fabulous, but after downloading the material from MIT, I found that it contained PDF files of the book's chapters, but not one with everything (which in my opinion would have been a lot more helpful).

But, not to fret if you are using Linux (which I am, Ubuntu to be precise), as there is a program called Ghostscript which will make combining the files MUCH easier than going out and purchasing Adobe Acrobat (no need, believe me).

I had stumbled across this method on this website, which gave a very good overview of using ghostscript to combine a bunch of pdf files into one big pdf file. 

So, I set about to combine the 13 pdfs into one.  The following command was what I issued:

        gs -dBATCH -dNOPAUSE -q -sDEVICE=pdfwrite -sOutputFile=httlacs.pdf ch01.pdf ch02.pdf ... ch13.pdf

In this command, the "..." symbolizes all of the files I listed.  Now please know that you have to list the files in the order that you want them combined.  The program will start at the first file on the left (ch01.pdf) and work through the list to the end (ch13.pdf).  When the command was done running (it took a couple of minutes to complete), I had a nice PDF by the name of httlacs.pdf that contained the entirety of the book.  Very nice and very easy, if you ask me.
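One small convenience: since the chapter files are zero-padded (ch01 through ch13), they sort naturally, so you could also let the shell expand the list for you.  This assumes your files actually sort in the order you want them combined:

        gs -dBATCH -dNOPAUSE -q -sDEVICE=pdfwrite -sOutputFile=httlacs.pdf ch*.pdf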

Hope this helps you!

Friday, October 24, 2008

The [sometimes] trouble with updates

No, I am not referring to Windows updates. Instead, I am referring to Linux updates. There are mixed views on whether you should do updates or not on your system(s). Here is my view: if it is a server, update it ONLY when you absolutely need to. This means only if there is a security hole that is fixed by a newer version. On the other hand, if it is a desktop system, then whether you update is up to you.

If you do updates though, you really need to watch out. Make sure you know what exactly is happening during the update(s) as things tend to change and also stop working.

I did an update earlier this evening after booting up my laptop. No worries, right? Wrong. When I tried to start my apache web server (which was previously installed and working fine), I discovered that it was no longer working. Strange if you ask me, but it wasn't starting up. It kept giving me the same errors:

# /etc/init.d/apache2 start
* Starting web server apache2 [Fri Oct 24 22:15:10 2008] [warn] module php5_module is already loaded, skipping
(98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
Unable to open logs

I started by plugging the different errors into Google, but kept going in directions I didn't think were the answer. Then I looked at the last error: "Unable to open logs". That was odd, because I hadn't changed anything, but... then there were the updates.

So I checked the /var/log/apache2 directory and saw that it was owned by root and had a group of adm. I distinctly remember the group being root the other day. So, instead of changing permissions (which is something I was not wanting to do), I decided to add root to the adm group.
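For the record, the command for that would be something along these lines (run as root or with sudo):

# usermod -aG adm root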

I restarted apache and VOILA!!! Problem fixed. So, the lesson.... know what updates are being applied and if something doesn't work afterwards, you will at least know why.


Sunday, October 12, 2008

Linux Filesystem Layout

I was "stumbling" around and I stumbled across a page that showed a graphical layout of the Linux Filesystem. For those that are beginning in their Linux ventures, I can imagine that this will be quite helpful. Without an understanding of the filesystem, a lot of things will be lost in your learning process, so learn it well.

Saturday, March 29, 2008

Realization

It's funny when you have a love for something like an operating system (say, Solaris Unix), and you have been using it for about 10 years. Then, after a very long time, you get it installed on your laptop for the first time (or any x86 platform, for that matter), and it doesn't take but a few days for you to realize that it's not really suited for your daily activities.

That was the realization that I came to yesterday. Yes, I know, I put a fair amount of effort into getting some things worked out, but after playing around with it and actually trying to do my day-to-day stuff on the web, I realized that it really didn't have the support that I needed for some things, but Linux did.

Don't get me wrong, Solaris is still my OS of choice when it comes to being a server, but when it comes to the desktop, Linux has it hands down. So, I installed Suse 10.3 last night and have been getting things situated today.

Tuesday, November 28, 2006

Yet Another Novell Update

My last update on the Novell sellout was back on November 11th, where I elaborated on the facts of the situation and added my own commentary and thoughts. I have seen quite a number of posts on my local LUG mailing list surrounding Novell and Micro$oft, and today there was one that was completely blog-worthy.

Bruce Perens has written a letter from the Open Source community to Novell, in a calculated, well thought out response to Novell's "deal" with the Evil Giant that is Micro$oft. If you are as insanely upset with this deal as the rest of us, then please, read the letter and sign it because it is also a petition to Novell to say that none of us are pleased with what they have done.

Even if you don't sign it, the letter is definitely worth a read, as it makes some interesting points. One of the most interesting is that even though companies like Micro$oft and SCO feel it necessary to sue Linux companies to get what they want (whether that be $$$ or, in Micro$oft's case, control of what they cannot control), there are so many patents related to the software industry that if those who own them constantly enforced them, the industry would come to a screeching halt. Corporations like the aforementioned are just using their patents as a strong-arm technique.

I would say to any other companies, "Stand Your Ground!"

Tuesday, November 07, 2006

And the sellout is..............NOVELL

I would say that the recent announcement by Novell of their newfound partnership with Micro$oft has left me speechless, but that wouldn't be totally correct. In fact, I am more than a little up in arms over the deal. Micro$oft has made no attempt to hide their overwhelming dislike for Linux and all that it stands for, and even after seeing all of that over the years, Novell (Suse Linux) has gone and partnered with them.
All this is part of a deal so that Micro$oft won't sue Novell for patent infringement over code they claim is in the Suse distribution. Kind of déjà vu, if you ask me, given SCO's filings and lawsuits over quite similar claims. But the difference is, those that were sued by SCO stood up to them and WON!!

It makes me wonder what Novell is so incredibly scared of. Are the claims by Micro$oft valid? Does the questionable code actually exist in the distribution? I guess we won't find out any time soon considering they are now in bed with the big bad giant of the software world. I just hope that considering this loss to the Linux community, no other distributions are weak enough to fold to this pressure.

That's right, I called this a loss. Up to right now I have been really taken with Suse as a distribution. The fact that a lot of the functionality that I was looking for (ie: cd/dvd burning, cd/dvd image creation/ripping, to name a couple) simply worked right out of the box says a lot about a distribution. Granted, not everything worked the way it was supposed to. I have been trying to learn Apache 2.0 better so that I can get the httpd.conf file set up correctly to support cgi scripting, as it seems that the file I have by default differs greatly from the default file on other "working" systems, like Fedora. Also, the directories where things like Apache are installed differ from the defaults in the software's documentation (unlike distros like Fedora, which stick to the vendor's defaults).

Those things aside, Suse is a generally good desktop OS, but now, with Micro$oft being involved with them, I give the distro two years. Two years and it will be almost non-existent, that is, if Micro$oft has its way. I don't think they would make a deal like this unless there was a lot in it for them. Ballmer is too much of a Linux hater to make a deal that was beneficial to the Linux community.

If you look at the Novell website, you will see the title "Bridging the Divide" right on the front page. That is an extremely weak bridge, possibly made of doozer material. Personally, I will not be venturing across and I don't advise anyone else to either.

Meanwhile, over at the evil giant's website, there is NOT ONE MENTION of the fact that this deal ever happened. Of all the information on their main page, the deal with Novell isn't even hinted at. So, while Novell feels the need to announce the deal to the world, Micro$oft chooses the null approach. Makes you wonder how important it really is to them.

To Mr. Ballmer and his organization I say, "Enjoy Suse, it's yours now". To the rest of the Linux community and to advocates like myself I say, "Let's sew up this wound and go on. It's only a scratch, and a scratch never killed a movement".

At this point, I am in the process of downloading the DVD of Fedora Core 6. As for my Suse 10.1 DVD and SLED 10 discs, well, I was looking for some good cup holders.

GO LINUX!!!
 
Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 License.