[PCLinuxOS] Manually upgrading Bind / Named to version 9.9.2-P2 [Security patches].

Hi folks.

The latest BIND / Named version was released several days ago to patch a recently disclosed security vulnerability.

I will try to show how to download, extract, configure and install the latest version.

Open terminal window and follow this set of instructions:


su

and give root's password when prompted.

export PREFIX=/usr

export PATH=$PREFIX/bin:$PATH

export PKG_CONFIG_PATH=$PREFIX/lib/pkgconfig:$PREFIX/share/pkgconfig

cd /opt/

mkdir Bind

cd Bind

wget -c ftp://ftp.isc.org/isc/bind9/9.9.2-P2/bind-9.9.2-P2.tar.gz

tar xvzf ./bind-9.9.2-P2.tar.gz

cd bind-9.9.2-P2

./configure --prefix=$PREFIX --sysconfdir=/etc/

You can expect missing dependencies here. I had no problems whatsoever as I have a good few “devel” packages installed. If You do run into a snag, figure out what You’re missing, install it from Synaptic (without closing this window) and re-run the configure step above until there are no errors.
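If configure bails out, the quickest way to spot the missing dependency is to save its output and grep for the error line. A small sketch – the log content below is fabricated purely to demo the grep (in real use You would capture the log with `./configure ... 2>&1 | tee configure.log`):

```shell
# The log content here is made up just to illustrate the technique.
printf 'checking for OpenSSL... no\nconfigure: error: OpenSSL not found\n' > configure.log
# Pull out the first error line from the saved log:
grep -m 1 'error:' configure.log
# → configure: error: OpenSSL not found
```

The error line usually names the missing library; the matching PCLinuxOS package is typically the one with the “-devel” suffix.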


make install

ls --full /var/lib/named/var/

one of the listed items should look like this:

drwxr-xr-x 7 root root 4096 2013-03-22 09:08:02.163308440 +0100 named/

ls --full /var/lib/named/var/named

chown named:named /var/lib/named/var/named/

Run the ls command again – the named/ entry should now be owned by named:

drwxr-xr-x 7 named named 4096 2013-03-22 09:08:08.221303100 +0100 named/

Now in this terminal window type in

named -v

the reply should look like this:

BIND 9.9.2-P2

Now restart the service:

service named restart

and the reply should look something like this:

Stopping named: [ Failed ]
Starting named: [ OK ]

This should be it… You have compiled and are now running the latest patched version of BIND…




Boy do I love sshfs… Mounting an ssh / sftp share as a local drive.

Hi folks.

I have a machine that runs an ssh server. That’s nothing new. Neither is it worth mentioning under normal circumstances… Recently I have purchased a 2 TB Western Digital MyBook USB 3.0 hard drive and I was going to use it to backup all my data. Why not make it a network shared drive, I thought. It would make my life much easier if I could access the data from all my machines. Not a bad idea… I know… but I am not going to setup samba or nfs. I don’t want to make it a “network” drive. I want to have it mounted as a local drive on every machine that I use without a big fuss… How do I go about it?

I assume that You have the drive attached to the machine running the ssh server, that it’s mounted, and that Your user has read and write permissions on it. I am using static IPs in my network – this makes things much easier as well.

In my case the drive is mounted on the server (ssh listening on port 20202) as /media/1862_GB_X-Ternal/ and my user andrzejl is the only user allowed to read from and write to it.

Now it’s time to prep the client machine. It’s really simple…

I want to have my drive mounted on my ssh client machines in /media/1862_GB_X-Ternal/ folder but I want to mount it as user (andrzejl) – not as root.

First I had to open terminal and gain root’s privileges by issuing:

su

and giving root’s password.

Next I had to create my mount point:

mkdir -p /media/1862_GB_X-Ternal/

and make andrzejl owner of it:

chown -Rf andrzejl:andrzejl /media/1862_GB_X-Ternal/

Now that I had the folder ready I needed a package that would allow me to work with sshfs / sftp file system so in the same terminal I ran:

apt-get install sshfs-fuse

After the package was downloaded and installed I could close this terminal window and open another one. I needed to drop root’s privileges as I wanted to do the rest as a regular user.

The syntax of the command looks like this:

sshfs -p sshSERVERport loginTOtheSSHserver@IPorHOSTNAMEofTHEsshServer:/where/is/the/drive/mounted/on/the/server/ /where/to/mount/on/local/machine/

Now… filling in my data (SERVER_IP stands for the server’s IP or hostname), the command looks like this:

sshfs -p 20202 andrzejl@SERVER_IP:/media/1862_GB_X-Ternal/ /media/1862_GB_X-Ternal/

After running this command and typing in the password (if You got the syntax right) You should find all Your data on Your ssh client machine mounted in /media/1862_GB_X-Ternal/ ready to be read and modified by Your user. To unmount it later run: fusermount -u /media/1862_GB_X-Ternal/


IF You want the data to be automounted at start up without typing in the password, follow this post: Passwordless SSH authentication. Using authentication keys.

You also need to create a mountsshfsshare.sh script in your ~/.config/autostart folder and make it executable.

Here is how I do it under KDE4.

Open terminal. Type in:

touch ~/.config/autostart/mountsshfsshare.sh

chmod +x ~/.config/autostart/mountsshfsshare.sh

echo "sshfs -p 20202 andrzejl@SERVER_IP:/media/1862_GB_X-Ternal/ /media/1862_GB_X-Ternal/" > ~/.config/autostart/mountsshfsshare.sh

Don’t forget to modify the sshfs line to suit Your needs.

Just to check run this:

cat ~/.config/autostart/mountsshfsshare.sh

It should spit out:

sshfs -p 20202 andrzejl@SERVER_IP:/media/1862_GB_X-Ternal/ /media/1862_GB_X-Ternal/

or whatever command You use to mount the sshfs share. Now You can reboot the ssh client machine for testing purposes. If You did everything properly – You will have a mounted drive waiting for You next time You boot up Your machine.
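A slightly more robust way to write the same script is a here-document with a proper shebang line (SERVER_IP is a placeholder – put Your own server’s address there):

```shell
# Create the autostart script with a shebang; adjust the sshfs line to your setup
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/mountsshfsshare.sh <<'EOF'
#!/bin/sh
# SERVER_IP is a placeholder for the ssh server's IP or hostname
sshfs -p 20202 andrzejl@SERVER_IP:/media/1862_GB_X-Ternal/ /media/1862_GB_X-Ternal/
EOF
chmod +x ~/.config/autostart/mountsshfsshare.sh
```

The quoted 'EOF' keeps the shell from expanding anything inside the here-document, so the sshfs line lands in the file exactly as typed.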

Edit 01: Sometimes the .sh script will not work. In that case try creating a .desktop file instead.

Remove the .sh file first.

rm -f ~/.config/autostart/mountsshfsshare.sh

Now create the .desktop file.

touch ~/.config/autostart/mountsshfsshare.desktop

Now edit the file using Your favorite editor. I will use mcedit here. Paste this into it:

[Desktop Entry]
Type=Application
Name=Mount SSHFS
Comment[en_US]=Mount SSHFS automagically.
Comment=Mount SSHFS automagically.
Exec=sshfs -p 20202 andrzejl@SERVER_IP:/media/1862_GB_X-Ternal/ /media/1862_GB_X-Ternal/

Do not forget to change the sshfs line. Now save the file and reboot for testing.

Edit 02: IF neither the startup script nor the desktop file works for You, add (as root, using Your favorite text editor) lines like these at the end of Your /etc/rc.local file:

echo "Mounting SSHFS share as andrzejl"
su andrzejl -c "sshfs -p 20202 andrzejl@SERVER_IP:/media/1862_GB_X-Ternal/ /media/1862_GB_X-Ternal/ &"

Don’t forget to leave one empty line at the end of the file. Also You will need to modify the lines to Your needs of course.

I like this setup very much for a good few reasons. Here are just a few:

a) hard drive is being shared over the network but it feels and acts like a local drive
b) it’s not accessible by the windows machines without specific setup
c) it’s easy to setup permissions so only one user or group has full access to the drive. You can let some folks see the drive as read-only while You keep the privileges to write to it.
d) like everything that runs via ssh the traffic between you and the hdd is encrypted



How to find all the empty folders inside the current folder using terminal? How to filter the output so it only shows folders whose names DO NOT match a certain pattern?

It’s simple:

find . -depth -type d -empty | grep -i -v -e "pattern"

You can filter out more than one pattern:

find . -depth -type d -empty | grep -i -v -e "pattern1" -e "pattern2" -e "pattern3" -e "pattern4"

This command will find all the empty folders in the current (.) folder and grep (ignoring UPPER or lower case) through the names, displaying only those that DO NOT match the given pattern.
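You can rehearse this safely in a throwaway directory first – here keepme survives the filter, Cache gets filtered out despite being empty, and full is skipped for not being empty:

```shell
# Build a disposable test tree: two empty folders, one non-empty folder
tmp=$(mktemp -d)
mkdir -p "$tmp/keepme" "$tmp/Cache" "$tmp/full"
touch "$tmp/full/file.txt"
cd "$tmp"
# Empty dirs are ./keepme and ./Cache; the case-insensitive filter drops ./Cache
find . -depth -type d -empty | grep -i -v -e "cache"
# → ./keepme
```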



My screen is way too dark when booting to PCLinuxOS… What can I do?


So Your screen is normally bright but for whatever reason when You boot up to PCLinuxOS the brightness level is very low?

Try this:

1) Install xbacklight via synaptic
2) Open terminal and su to root
3) List the content of the folder /sys/class/backlight with this command:

ls --full /sys/class/backlight

4) You should get a few hits:

[root@wishmacer backlight]# ls --full /sys/class/backlight
total 0
lrwxrwxrwx 1 root root 0 2012-11-04 02:10:05.023004946 +0000 acer-wmi -> ../../devices/platform/acer-wmi/backlight/acer-wmi/
lrwxrwxrwx 1 root root 0 2012-11-04 02:09:09.784000471 +0000 intel_backlight -> ../../devices/pci0000:00/0000:00:02.0/drm/card0/card0-LVDS-1/intel_backlight/
[root@wishmacer backlight]#

5) Now You need to get Your command right. We are gonna echo a brightness value into the correct file. The file and the valid range will be different between machines, but I think it’s safe to start with values from 1 to 15.

6) So knowing that let’s try the intel_backlight folder first:

echo -n 15 > /sys/class/backlight/intel_backlight/brightness

This however didn’t go so well.

[root@wishmacer backlight]# echo -n 15 > /sys/class/backlight/intel_backlight/brightness
bash: echo: write error: Invalid argument
[root@wishmacer backlight]#

7) So let’s try the acer-wmi folder:

echo -n 15 > /sys/class/backlight/acer-wmi/brightness

BINGO! Screen got bright.

8) Now that You know what folder/file to modify – try changing the variable from 15 and see if You get better results with other numbers. See if You can go 16 or 14 for example.

9) When Your command is ready open Your favorite text editor as root and modify the /etc/rc.local file by adding the command as a last line.

10) Save the file and reboot.
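By the way, instead of guessing the upper limit in step 8, You can ask the kernel – every backlight folder has a max_brightness file sitting next to the brightness one. A small sketch (paths differ per machine; on a box with no backlight interface it simply prints nothing):

```shell
# Print each backlight interface together with its allowed maximum value
for d in /sys/class/backlight/*/; do
  [ -e "$d" ] || continue   # the glob did not match anything; skip
  printf '%s max=%s\n' "$d" "$(cat "$d/max_brightness")"
done
```

Any value from 0 up to that maximum is accepted by the brightness file; anything above it gives the “Invalid argument” error.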

Hope this helps!



Irssi – Ignoring private messages from certain (annoying) people without ignoring their public messages on the channel.

Hi folks.

Some people don’t give a crappoli about the netiquette. They just do whatever they want to whenever they feel like it. The most common annoyance is PMing You out of the blue. I met a whole bunch of those pests in my days so I am gonna show You how I deal with them.

When connected to the server / channel type in:

/ignore NICKNAME MSGS


Don’t forget to replace NICKNAME with the actual nickname of the person that keeps sending You private messages without asking.

Let’s say I want to ignore a guy with a nickname Troll. The command will look like this:

/ignore Troll MSGS

After running it my Status window will tell me:

14:51 Ignoring MSGS from Troll

This way You can still read what Troll wrote in the channel but all private messages from this person will be ignored…

Now let’s say Troll has matured and stopped acting like a fool and You had a change of heart:

/unignore NICKNAME MSGS


will do the trick. Just remember to replace NICKNAME with the actual nickname.

After running:

/unignore Troll MSGS

Your status window will say:

14:52 -!- Irssi: Unignored Troll

I am pretty sure You will meet pests on IRC just like I did and I am pretty sure this command will come in handy then.



How to verify a signature using a .sig file.

Hi folks.

Downloading something from the internet CAN be risky… It can be very risky. I am sure You have heard about bad guys hacking into the server of some project and replacing their original download content with something dodgy. Dodgy as in containing backdoor or something just as nasty…

There is a way to minimize the risk of getting exploited by the evil dudes… Many of the projects online that are aware of this security risk are signing their downloads. I am sure You have seen it. You go to an ftp or http server, find the file that You are looking for, and next to it there is another file with the exact same name but with a .sig extension. This .sig file is the signature. You need to verify it in order to make sure that the content You have downloaded is what the project members wanted You to download and not some fake / infected crap.

How do we go about it?

It’s really simple.

Today I have downloaded an Arch Linux iso that I will be testing so I will use it as an example.

First I went to the Arch Linux Downloads site and chose the mirror closest to me. Then I have copied the download links for the iso and sig files and wrote a short “script”.

wget -c http://ftp.heanet.ie/mirrors/ftp.archlinux.org/iso/2012.10.06/archlinux-2012.10.06-dual.iso && wget -c http://ftp.heanet.ie/mirrors/ftp.archlinux.org/iso/2012.10.06/archlinux-2012.10.06-dual.iso.sig

Next I wanted to verify the iso file using the .sig file so I ran:

gpg --verify ./archlinux-2012.10.06-dual.iso.sig

but I got an error:

gpg: Signature made Sat 06 Oct 2012 03:28:53 PM IST using RSA key ID 9741E8AC
gpg: Can’t check signature: public key not found

So I started searching for the info and after a lot of research I finally combined something that works…

First You need to download the public key that corresponds with the RSA key ID:

gpg --no-default-keyring --keyring vendors.gpg --keyserver pgp.mit.edu --recv-keys RSA_key_ID

You need to replace the RSA_key_ID with the actual RSA key ID. You got it when the verification failed remember?

So in my case the command will look like this:

gpg --no-default-keyring --keyring vendors.gpg --keyserver pgp.mit.edu --recv-keys 9741E8AC

And the output of the command looked like this:

gpg: requesting key 9741E8AC from hkp server pgp.mit.edu
gpg: /home/andrzejl/.gnupg/trustdb.gpg: trustdb created
gpg: key 9741E8AC: public key “Pierre Schmitz ” imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)

Now that You have Pierre’s public key in Your vendors.gpg file we can try verifying the iso file again.

This time the command looks slightly different:

gpg --verify --verbose --keyring vendors.gpg ./archlinux-2012.10.06-dual.iso.sig

gpg: assuming signed data in `./archlinux-2012.10.06-dual.iso'
gpg: Signature made Sat 06 Oct 2012 03:28:53 PM IST using RSA key ID 9741E8AC
gpg: using PGP trust model
gpg: Good signature from “Pierre Schmitz “
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 4AA4 767B BC9C 4B1D 18AE 28B7 7F2D 434B 9741 E8AC
gpg: binary signature, digest algorithm SHA1

In this case the verification gave me mixed signals… Good signature… Not certified with a trusted signature… I wasn’t sure – so just in case I popped into the #archlinux IRC channel and asked…

23:34 AndrzejL: md5sum
23:36 [andrzejl@wishmacer Arch]$ md5sum ./*
23:36 aefd90da1ee49c745101179f50afa783 ./archlinux-2012.10.06-dual.iso
23:36 b4fcd64607a532afe1880f609bbfd141 ./archlinux-2012.10.06-dual.iso.sig
23:38 AndrzejL: i just need the content of the .sig file to match
23:38 AndrzejL: seems to be matched to the md5sum.txt
23:40 ceezer: so i should be ok using those isos?
23:40 AndrzejL: yes.
23:40 AndrzejL: you should be

and the helpful crowd sorted me out.
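For the record, the WARNING itself is nothing alarming: it only means the key is not yet in Your web of trust. The standard remedy is to compare the fingerprint gpg printed with one published out-of-band – the project’s website, a forum post, a keyserver listing. A sketch, using the fingerprint from the output above as both values:

```shell
# The fingerprint gpg printed vs. the one published by the project
# (the same value on both sides here, taken from the output above)
reported="4AA4 767B BC9C 4B1D 18AE 28B7 7F2D 434B 9741 E8AC"
published="4AA4 767B BC9C 4B1D 18AE 28B7 7F2D 434B 9741 E8AC"
if [ "$reported" = "$published" ]; then
  echo "fingerprint matches"
else
  echo "MISMATCH - do not trust this download"
fi
# → fingerprint matches
```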

I think that this HOWTO explains well enough how to verify downloaded files (iso, gz, zip etc.) if a .sig file is provided and I hope You will find it useful.



My 16 gigs Corsair Flash Voyager GT has died…

Hi folks.

My 16 gigs Corsair Flash Voyager GT has died… No biggie. I am not writing this to complain or cry out. It’s gonna be a happy ending story.

Some time ago my other 16 gigs pendrive died on me too. It was a long time after its warranty had expired. It was old. I had a spare one. No biggie. Why am I even mentioning it then? Simply because I want to describe the behavioral pattern. So the story is… It started a few weeks before the thumbdrive died completely. I had a video from a friend’s wedding copied onto the pendrive. I was watching it. All of a sudden SMPlayer closed – no errors – clean exit. I thought “What the hell…” and tried to play the video again. Well… No video to be played. And then I noticed something far worse than the missing video… “Holy crap, where’s my pendrive…”. Yes. The dongle was not recognized by the system. I unplugged it and plugged it back in and everything worked fine again. I thought the USB port was to blame. Maybe a software glitch. I remember thinking that maybe the motherboard of that lappy was going bad… A few days later I was watching some other video from this pendrive on another machine. SMPlayer died again, twice within 10 minutes… “Uhuh… that is not a usb / mobo problem…” I thought, and I copied all the data from the memory stick to the HDD on my main machine. I sensed the reaper coming after my old friend. After a while the system was “losing” the drive way too often – it became unreliable. I tried many things to recover it – nothing worked.

Last night I was watching a Ted.com talk from the Voyager and SMPlayer closed. It closed again 20 minutes later… I knew what was going on and copied the data off the flash drive right away. I hear that Corsair has great confidence in their products and gives long term warranties… 5 years or sometimes even lifetime… This pendrive has been with me for less time than that… So I went to the manufacturer’s site and reported a dying pendrive. I was told to send the Voyager to the Netherlands to be replaced. BUT… but… but… what about all my pr0n documents… I don’t want some curious dude at Corsair to be able to recover all my notes and photos and so on… How would I overwrite the drive with useless random data that would make recovery hard or nearly impossible?

After a while of searching I have combined a few commands for my convenience. Here they are:

Run these commands:

su


Now give it root’s password

Then run:

fdisk -l

That’s fdisk space dash lower case L.

This command will list all the hard drives available in Your system. Example:

[root@icsserver andrzejl]# fdisk -l

Disk /dev/sda: 40.0 GB, 40007761920 bytes
240 heads, 63 sectors/track, 5168 cylinders, total 78140160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xef08263a

Device Boot Start End Blocks Id System
/dev/sda1 * 63 73392479 36696208+ 83 Linux
/dev/sda2 73392480 78140159 2373840 5 Extended
/dev/sda5 73392543 75479039 1043248+ 82 Linux swap / Solaris
/dev/sda6 75479103 78140159 1330528+ 83 Linux
[root@icsserver andrzejl]#

This machine for example has only one HDD /dev/sda and it’s 40 gigs.

Now once You have found the correct drive run this:

dd if=/dev/urandom of=/dev/sdx & pid=$!

Remember to replace x with the correct drive letter… DO NOT MAKE A MISTAKE. DD does not ask. DD writes. If You make a mistake and write random data to the wrong drive You are the only one to blame…

In my case it’s /dev/sde drive that I want to “randomize” ;).

[root@wishmacer andrzejl]# dd if=/dev/urandom of=/dev/sde & pid=$!
[1] 20951
[root@wishmacer andrzejl]#

It gives me a process id and then runs in the background. You can then check the progress by issuing the command:

kill -USR1 $pid

The result will look somewhat like this:

[root@wishmacer andrzejl]# kill -USR1 $pid
[root@wishmacer andrzejl]# 10171578+0 records in
10171577+0 records out
5207847424 bytes (5.2 GB) copied, 2710.78 s, 1.9 MB/s
[root@wishmacer andrzejl]#

It spits out pretty useful info.

Sometimes it may not give You a prompt. It will look like it froze. Don’t worry. Punch enter. Prompt is back ;).
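If You want to rehearse the kill -USR1 trick without touching any real disk, point dd at /dev/null first – nothing gets written anywhere:

```shell
# Harmless rehearsal: copy 4 GiB of zeros into the void, in the background
dd if=/dev/zero of=/dev/null bs=1M count=4096 & pid=$!
sleep 1
kill -USR1 $pid 2>/dev/null || true   # dd may already be done; ignore that case
wait $pid                             # dd exits 0 when the copy completes
```

The progress report lands on stderr, same as with a real drive.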

It will take a longer while but once it’s done You will see something like this:

[root@wishmacer andrzejl]# dd: writing to `/dev/sde': No space left on device
31719425+0 records in
31719424+0 records out
16240345088 bytes (16 GB) copied, 8522.4 s, 1.9 MB/s

This means that the process has finished. This should be sufficient – the data on Your drive has been overwritten with “random” gibberish. IF You are paranoid and You want to make the recovery process even more difficult – run the dd command a few times. You don’t have to format the disk or anything. Just re-run the command in the terminal. 5 – 10 times should do it.